Wednesday, March 26, 2014

Leiningen upgrade problem

After working flawlessly for a long time, my local Leiningen script suddenly failed to upgrade Leiningen to the latest version. Instead, it spit out the following error message:

The script at /usr/local/bin/lein will be upgraded to the latest stable version.
Do you want to continue [Y/n]? y

Upgrading...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   137  100   137    0     0    121      0  0:00:01  0:00:01 --:--:--   182
curl: (35) error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112)
Failed to download https://github.com/technomancy/leiningen/raw/stable/bin/lein
It's possible your HTTP client's certificate store does not have the
correct certificate authority needed. This is often caused by an
out-of-date version of libssl. Either upgrade it or set HTTP_CLIENT
to turn off certificate checks:
  export HTTP_CLIENT="wget --no-check-certificate -O" # or
  export HTTP_CLIENT="curl --insecure -f -L -o"
It's also possible that you're behind a firewall haven't yet
set HTTP_PROXY and HTTPS_PROXY.

The Leiningen script seems to be (incorrectly) guessing at the problem, which had me spinning my wheels for a short time. The solution I found online was to add a '-3' or '--sslv3' flag to the 'curl' command within the lein script. The final modified line (as of the date of this post) is:

        HTTP_CLIENT="curl $CURL_PROXY --sslv3 -f -L -o"

Tuesday, November 13, 2012

Just in Time for the End of the World: a Mayan Calendar Generation Program

OK, just kidding...my wife and I don't really believe that the world
is coming to an end at the end of the current Mayan Great Cycle.
And to demonstrate our optimism that we'll still be here after
December 21st of this year, we are finally releasing the first
version of our Mayan calendar generation program (written in
Clojure, of course).  Version 1.0.0 is available on GitHub at
https://github.com/hickst/mayancal.

The Mayancal program generates a PDF file containing an illustrated
Mayan calendar for the specified year (the default is 2012...just in
case). The earliest available year is 1900, which should still allow
most users to generate a calendar for their birth year.

The Maya developed a sophisticated calendar based on the
intersection of various cycles, especially the 260-day Tzolkin
(ritual calendar) and the 365-day Haab (a rough solar calendar).
The Tzolkin named each day, much like our days of the week, using
20 day names, each represented by a unique symbol. Each day also
carried a number from 1 to 13 (the Trecena cycle); after thirteen,
the count simply began again at 1. Since 13 and 20 have no common
divisors, every one of the 260 (13*20) days of the sacred year
gets a unique number and day-name combination.
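
A quick way to convince yourself of that arithmetic is to pair up
the two cycles in Clojure and count the distinct combinations (a
toy snippet for illustration, not code from the program):

    ;; Pair an infinite 1..13 number cycle with an infinite
    ;; 20-element cycle (digits stand in for the real day names),
    ;; then count the distinct pairs in one sacred year.
    (count (distinct (take 260 (map vector
                                    (cycle (range 1 14))
                                    (cycle (range 20))))))
    ;=> 260 -- no number/day-name pair repeats within the 260 days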

The Haab was a rough solar year of 365 days. The Haab year
contained named months called Uinals: 18 regular months of 20 days
each, plus one special five-day month called Uayeb. Days of the
Haab months were numbered 0 to 19 (the Veintena cycle). Each day
thus had a number and day name from the 260-day Tzolkin, as well
as a number within its Haab month. Using the intersections of
these cycles, each day can be identified by a four-tuple:
[Tzolkin number, Tzolkin day, Haab number, Haab month].

Using Clojure's infinite lazy sequences, these interacting cycles
can easily be generated and combined with an infinite Gregorian
date sequence (see the mcal.clj file in the source code). To
simplify the program, all sequences are synchronized to start at
the Gregorian date of 1/1/1900. Any given day is then found by
dropping the appropriate number of elements from the heads of the
synchronized sequences, as sketched below.
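
The core of the approach looks something like the following sketch
(a simplification for this post, not the actual mcal.clj code; the
Gregorian date sequence and the true cycle alignments are omitted):

    ;; Trecena: the day numbers 1..13, repeating forever.
    (def trecena (cycle (range 1 14)))

    ;; The 20 Tzolkin day names, repeating forever.
    (def tzolkin-names
      (cycle ["Imix" "Ik" "Akbal" "Kan" "Chicchan" "Cimi" "Manik"
              "Lamat" "Muluc" "Oc" "Chuen" "Eb" "Ben" "Ix" "Men"
              "Cib" "Caban" "Etznab" "Cauac" "Ahau"]))

    ;; Haab: 18 months of 20 numbered days (0..19) plus the
    ;; five-day Uayeb.
    (def haab
      (cycle (for [month ["Pop" "Uo" "Zip" "Zotz" "Tzec" "Xul"
                          "Yaxkin" "Mol" "Chen" "Yax" "Zac" "Ceh"
                          "Mac" "Kankin" "Muan" "Pax" "Kayab"
                          "Cumku" "Uayeb"]
                   day   (range (if (= month "Uayeb") 5 20))]
               [day month])))

    ;; Combining the cycles element-wise yields an infinite stream
    ;; of [Tzolkin number, Tzolkin day, Haab number, Haab month].
    (def maya-days
      (map (fn [number name [haab-day haab-month]]
             [number name haab-day haab-month])
           trecena tzolkin-names haab))

    ;; With day 0 as the synchronization point (1/1/1900 in the
    ;; real program), any later day is just a 'drop' away:
    (first (drop 10000 maya-days))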

Some miscellaneous notes: because of the many great public domain
images and icons used to illustrate the calendar, the output PDF
file tends to be rather large, so you probably don't want to email
calendars to all your relatives for Christmas. Also, we've tried
viewing the calendar in Preview 5.5.3, Adobe Reader 10.0.1, Skim
1.3.22, and GoodReader for iPad 3.18.6, and we've seen a few
differences between these viewers. Only Adobe Reader and
GoodReader, for example, were able to follow the embedded links,
and GoodReader had trouble displaying the links on the last page.

As I said, this is version 1, so if you encounter any issues please
let us know. We hope that this toy provides you with some fun and
diversion from the more serious, real-world uses of Clojure.

Thursday, September 6, 2012

EIPs in ROC

Randy Kahle has recently written a blog entry which artfully expresses the thinking behind the yearning for a new ROC (Resource Oriented Computing) language. I suspect that most ROC users would be happy just to see a better syntax for module creation but, as Randy points out, a ROC language should provide a better way to conceptualize the information flow through the system.

I've thought for some time now that what's missing in NetKernel is an abstraction of the higher level patterns of information flow through spaces; an implementation of EIPs (Enterprise Integration Patterns) for ROC. EIPs in ROC could be made manifest in a couple of ways, the simplest being to express the pattern as a particular composition of existing space elements. This approach, however, strikes me as analogous to expressing a pattern in an assembly language without macros: too detailed and hard to repeat correctly. A more powerful solution would be to express some of the simpler EIPs in ROC by encoding them directly as new types of overlays. The most recent overlays, such as the Pluggable overlay and the Branch-Merge overlay, seem to be attempts to encapsulate such patterns of usage. Sadly, these fall far short of the simplicity and beauty that they could have if they were not mired in the grammar-less verbosity of XML. (1)

Please note that, by saying "patterns" (inherent in EIP above) I mean something more abstract and more encompassing than the existing recipes for the connection and interaction of spaces, which are labeled as "Module / Space Patterns" in the NetKernel documentation. While these recipes are a step in the right direction, their level of granularity seems too small to express anything but the simplest EIP. In addition, their descriptions focus on the mechanics of space interconnection and it's very hard to elicit how they can be composed into EIPs.

A new ROC language built around Enterprise Integration Patterns would allow ROC programmers to concentrate on solving their application problems by focusing on the high-level, logical flow of information through the system. Such a language would include, at a minimum, the ability to compose, connect, and visualize some base set of EIPs.  An additional ability to implement arbitrary EIPs would be extremely powerful but might require that the existing facilities to build modules and factories be enhanced, simplified, and canonized into a clean API (re: this forum discussion fragment). If the new ROC language were to eschew XML and to rely on a simple syntax, I feel this would be a huge win. Finally, I believe that a new language built around EIPs would greatly contribute to the usability and adoption of NetKernel and ROC.


1. It's interesting to speculate on the reasons why there are not more higher-level patterns and why they are not easy to spot in NetKernel's standard module. I think the principle of Linguistic Relativity is at work here: the idea that the structure of a language affects the ways in which its users conceptualize their world.


Friday, March 9, 2012

Why natural languages have grammars

This occurred to me after reading several NetKernel forum entries crying out for help with NK's "declarative syntax". I think there are some great ideas in NetKernel but the idea of burying your programming language within XML is not one of them.
<sentence>
  <np>
    <verb type="gerund" subject="true">Programming</verb>
    <pp>
      <preposition>in</preposition>
      <noun type="proper" acronym="true">XML</noun>
    </pp>
  </np>
  <vp>
    <verb tense="present">is</verb>
  </vp>
  <np>
    <determiner singular="true" indefinite="true">a</determiner>
    <noun>pain</noun>
    <pp>
      <preposition>in</preposition>
      <determiner definite="true">the</determiner>
      <noun>ass</noun>
    </pp>
  </np>
</sentence>

Friday, May 27, 2011

Keep an SSH connection alive

I'm reposting this from Cosmin Stejerean (offbytwo.com) as a reminder to myself about how to solve a problem that's plagued me for years when connecting to the U via SSH.

"If you are having problems with your SSH connection getting dropped after a certain amount of time (usually caused by NAT firewalls and home routers), you can use the following setting to keep your connection alive:


Host *
ServerAliveInterval 180

You can place this either in ~/.ssh/config for user level settings or in /etc/ssh/ssh_config for machine level settings. You may also replace * with a specific hostname or something like *.example.com to use on all machines within a domain. This is the cleanest way of making sure your connections stay up and doesn’t require changes to the destination servers (over which you may not have control)."

Monday, May 16, 2011

Out of the blue

I got this (probably unintentionally) hilarious spam from O'Reilly, which I first took for a very late April Fool's joke:

[photo from the O'Reilly email]

Given this photo, I'd say that "small pharmaceutical company" must be one of those medical marijuana boutiques in California.

Wednesday, June 16, 2010

Manning makes good on cancelled book

An update to my previous post about Manning: they've now officially cancelled the CouchDB in Action book. To their credit, they are taking good care of customers (like me) who had already ordered the MEAP edition. We were offered the choice of (1) getting our money back or (2) a replacement book or eBook (depending on our original order) AND another eBook free. I am very happy with this arrangement and have already taken the replacement offer, choosing other eBooks. I did, however, check each eBook's starting and (projected) publication dates: I picked books which had at least 4 or 5 chapters already available, and I made sure each book was being actively worked on. Manning seems to have several books that have drifted off into the figurative weeds (for example, see Taming Text, started in June 2008!)

Saturday, May 22, 2010

Obama and the Oil Spill

Great op-ed piece in the NY Times about how Obama is blowing the opportunity to use the Gulf oil disaster to lead the country to real, long-range solutions (which would help prevent disasters like this in the future):

Obama and the Oil Spill

Thursday, April 22, 2010

Manning going downhill fast

I'm becoming more and more disappointed with Manning Publications. They used to be a great source of eBooks on cutting-edge technologies by leaders in the field, and they publish some of the leading tech references: books such as Spring in Action, Groovy in Action, and Ant in Action are tech "classics".

Lately, however, I've noticed that their time-to-publication is growing, author quality is declining, authors are unknown in the community, books are being threatened with cancellation, there is more and more advertising of "vaporware" (books with only one or two small chapters), and eBook releases are poorly screened for even minimal formatting quality.

Some examples:

Just days ago I received an email from Manning describing how the authors of CouchDB in Action have fallen so far behind that the content is already out-of-date. Manning is debating whether to proceed with the existing content, entirely rewrite it, or cancel the book. There is no mention of what happens to customers who purchased the early access (MEAP) version (as I did) if the book is cancelled.

Then this morning, I received an update to Spring Integration in Action, usually a good and welcome thing. Unfortunately, the formatting of this version has some serious problems that were not present in the previous version: the text size varies wildly from chapter to chapter and, in those chapters where it is greatly increased, many of the figures obscure adjacent text and several figures are not visible at all.

Now, of course, it must be acknowledged that this is an Early Access version of the eBook, and various formatting, font, and figure problems must be expected in these drafts. However, to be useful at all, a draft must meet some minimal standard of readability, as the first MEAP version of this eBook did. The loss of this basic readability in the update gives the impression that no one is even reviewing the product before releasing it.

And, finally, to add insult to injury, Manning sent me a link to an online survey asking frequent customers for feedback. I patiently and completely filled out the form, but when I tried to submit it, it claimed that I had not answered a couple of questions and refused my submission. Rechecking the form showed that every question had been answered! So Manning remains completely oblivious to my disappointments with them.

Manning used to be great but, in my opinion, they are going downhill fast!

Sunday, February 7, 2010

99 Problems In Clojure - part 1: problems 1-17+

Recently, I ran across an interesting blog by someone with the username 'wmacgyver' (real name Mac Liaw?) who was translating and adapting a set of 99 Prolog exercises into Clojure.

Like wmacgyver, I played with the first 17 (or so) exercises and have posted a gist of my answers here. I intend (hope) to continue working on the rest whenever I can get some time.
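
To give a flavor of the exercises, here is one possible answer to problem 1 (find the last element of a list); this is just a sketch, not necessarily the answer in the gist:

    ;; P01: find the last element of a collection, by walking the
    ;; sequence until no elements remain.
    (defn my-last [coll]
      (if (next coll)
        (recur (next coll))
        (first coll)))

    (my-last [1 2 3 4])  ;=> 4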

wmacgyver's blog with the exercise post can be found here.

Sunday, January 24, 2010

Recent Clojure talk at the Tucson JUG

A couple weeks ago I gave the monthly presentation at the Tucson JUG on the programming language Clojure. Only six JUG members showed up but they were very interested in the language and kept me talking for over an hour beyond my initially allotted hour.

There are many reasons why Clojure has really caught on in the last year or so. For me, it's a well-designed and pragmatic amalgam of Lisp, concurrent techniques, and functional programming built on the JVM. It also helps that there's a great book, a friendly and helpful community, and dozens of enthusiastic side projects.

My slides (as a PDF file) are available in the Tucson JUG's Google group area:
http://tinyurl.com/yjrnh55

Note that my presentation relied on material from the Clojure community, including the website, forums, and the terrific book "Programming Clojure" by Stuart Halloway.

Wednesday, December 31, 2008

Matte Matters: I couldn't have said it better.

A recent MacWorld editorial by Rob Griffiths on Apple's glassy displays captures my feelings on this giant step backwards in display technology:

Matte Matters

Just look around when you're in your office, your favorite coffee shop, an airplane seat, a classroom, or even a library. There are almost always glaring light sources above and behind you. What were Apple's designers thinking? Don't they use their own products?

Monday, October 20, 2008

Final article in series finally published

I forgot to mention that the final article in the 4-part series written by Randy Kahle and me was published a few weeks ago on TheServerSide.com:

A RESTful Core for Web-like Application Flexibility - Part 4

The final article contains links back to the previous articles in the series. Just scroll down to the References section at the very bottom of the article.

Abbreviations and Code Readability

James Leigh, in a recent blog post, makes a couple of good comments on the importance of code readability and the presence of redundancy.

The accompanying poll question (which asks whether easily readable code is important), however, sidesteps the deeper question: of course easily readable code is extremely important, but the real question is how to achieve it.

For example, using abbreviations in identifier names is a poor way to make the names shorter and more concise.

Abbreviated names suffer from several problems, including:

1) ambiguity: is 'getReq' short for getRequest, getRequirement, or getRequisition?

2) cognitive burden: abbreviations require much more mental effort because one must remember exactly which fragment of a word is being employed. This "ideolexical" design makes an API seem much more complex and daunting than it should be.

As an example, is the abbreviation for 'declareDescription' going to be:

declareDescript,
declareDescrip,
declareDescr,
declareDesc,
declDescrip,
declDescr,
OR
declDesc?

3) lack of consistency: even with only one programmer creating the abbreviated identifier names, it seems highly probable that inconsistencies will creep into the naming scheme, making it harder to use.

4) loss of readability and documentation: longer names are often clearer and document the code better than abbreviations (or shorter names).

In these days of IDEs there is little reason not to use longer, clearer, self-documenting names: it is trivial to start a name and then hit the appropriate completion key. Even if you program without an IDE (as I do...I use Emacs a lot of the time), the importance of good names as documentation cannot be over-emphasized and is well worth a tiny bit of extra typing.

Thursday, October 9, 2008

Lisp turns 50 this month

"In October 1958, John McCarthy published one in a series of reports about his then ongoing effort for designing a new programming language that would be especially suited for achieving artificial intelligence. That report was the first one to use the name LISP for this new programming language. 50 years later, Lisp is still in use. This year we are celebrating Lisp's 50th birthday."
-from the Lisp50@OOPSLA web page

Thursday, September 11, 2008

How the Terrorists Won on 9/11 and Since Then

On this 7th anniversary of 9/11 it is time to finally admit
what most Americans already know in their hearts: that the
terrorists fully achieved their objectives, even beyond
their own wildest dreams. And, since then, unscrupulous men,
corporations, and our own government have helped the
terrorists to continue their success.

Seven years after 9/11, our country has turned against its
own ideals and principles in the name of security, while
ironically justifying its actions as preserving "freedom".

Seven years after 9/11, we are saddled with a costly and
pointless war against a country which had nothing to do with
the attacks. We have a massive new homeland security
bureaucracy which is hard at work trying to impose a
mandatory national ID system. The privacy of millions of
American phone conversations and emails has been secretly
and illegally violated by the government and submissive
corporations. Citizens of other countries have been
arbitrarily labeled as terrorists and jailed for years
without formal charges or a trial. Freedom of travel has
been restricted by secret and erroneous government "watch
lists", to which there is no judicial appeal. And through it
all, government agencies, such as the INS and DHS, have
simply declared themselves to have sweeping new powers.

Joseph Stalin is reputed to have said "When we hang the
capitalists they will sell us the rope" but the terrorists
of 9/11 have turned us against ourselves in a much more
insidious manner; they played upon our fear of death. And
the possibility of death by terrorism is being exaggerated
by those within our country who seek to maintain or expand
their power. The truth is that you are thousands of times
more likely to be killed by a traffic accident than by a
swarthy foreigner with a bomb. Over a quarter of a million
people have died in traffic accidents in the U.S. since 9/11
and yet there is no massive new Department of Automotive
Security.

The terrorists of 9/11 won by instilling fear in the
populace, causing us to give up some of our fundamental
freedoms and rights. It is time to awake from our long
national nightmare and to put our fears into perspective.
It is time to stop being overly afraid and to reject the
erosion of our hard-won liberties. It is time to stop
letting the terrorists win.

Saturday, August 16, 2008

Data Duck Typing

Some friends of mine at 1060 Research recently sent me a new version of some software they are working on. After reading one of the XML configuration files, I asked if they had an XML Schema for it (which would define the grammar for legal configurations). The answer was that they did not, as they were moving away from formal grammars and towards a rule-based approach like Schematron, which uses a set of pattern assertions (rules) for XML validation.

When I thought about this approach, I realized that it is duck typing for data. In object-oriented programming, the use of duck typing means that an object's behavior, rather than its class or inheritance structure, determines its interpretation and usage. The application of rule-based systems to categorize a data file or message is a data-oriented form of duck typing. Using "data duck typing", data is categorized (in this case validated) by having the right elements in the right locations.

Data duck typing means that a data file does not have to fully conform to a specific, rigid grammar as long as some of its parts meet the requirements of the particular rule set used for categorization. Thus, data messages for an application can come in all shapes and sizes as long as they contain the essential required elements with the right structural relationships. Applications which use this approach embody the design principle which says "be lenient in the messages that you accept" and will be much more flexible than applications based on rigid adherence to formal grammars.
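
To make the idea concrete, here is a toy Clojure sketch of rule-based validation (my own illustration of the concept, not Schematron and not 1060's software). A document passes as long as the asserted patterns hold, no matter what else it contains:

    ;; Each rule is a name plus a pattern assertion (a predicate).
    (def rules
      [[:has-id?     (fn [doc] (contains? doc :id))]
       [:order-type? (fn [doc] (= "order" (:type doc)))]
       [:has-items?  (fn [doc] (seq (:items doc)))]])

    ;; Validation returns the names of any failed assertions.
    (defn failed-rules [doc]
      (for [[rule-name pred] rules
            :when (not (pred doc))]
        rule-name))

    ;; Extra elements are irrelevant; only the asserted patterns
    ;; matter:
    (failed-rules {:id 42 :type "order" :items [:a] :x "ignored"})
    ;=> ()
    (failed-rules {:type "order"})
    ;=> (:has-id? :has-items?)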

crosslink: 5 reasons you don’t really want a jack-of-all-trades developer

Rebecca Murphey has posted an excellent blog entry in which she looks critically at the current employer trend of "asking for the moon":

"5 reasons you don’t really want a jack-of-all-trades developer".

Saturday, July 26, 2008

First article in a series posted at TheServerSide.com

Randy Kahle and I are writing a series of articles on Resource Oriented Computing (ROC), which you can think of as REST principles applied to application software development. The first article of the series was posted a couple of days ago on TheServerSide.com under the title

"A RESTful Core for Web-like Application Flexibility - Part 1"

Perhaps this title was somehow misleading, since the article immediately engendered a passionate (and not always civil nor complimentary) debate on various aspects of REST, most of it off-topic from the article. As a long-time reader of TSS, I expected that something like this could happen. My attitude is to encourage rational discussion, clarify misunderstood points, and ignore misbehavior. This is, BTW, an approach used successfully to deal with patients at mental hospitals.

Friday, July 18, 2008

Google's Protocol Buffers announcement

Nick L., at Google, recently sent the Tucson JUG a link to a blog posting about Google's newly open-sourced Protocol Buffers:

http://google-opensource.blogspot.com/2008/07/protocol-buffers-googles-data.html

"Interesting", I thought, "but why didn't you guys just use CORBA and get it over with?"

A snippet in the blog post seems to have anticipated that question:
"OK, I know what you're thinking: "Yet another IDL?" Yes, you could call it that. But, IDLs in general have earned a reputation for being hopelessly complicated."
Complexity sounds like a strawman here...the major problem with IDLs is that they are built upon a shared definition, which requires all parties to update and recompile when the definition changes. And once you recompile, the system has lost the ability to handle the old message format (so versioning is a serious problem unless you plan for it from the beginning).

Of course, these problems are ameliorated when:
1) the IDL is for internal use only, and
2) you control both ends of the conversation,
as Google does...er...did up until now.

I also wonder why Google didn't just use an existing protocol like Hessian:

http://caucho.com/products/hessian.xtp

Perhaps a case of NIH syndrome? (http://en.wikipedia.org/wiki/NIH_syndrome)

Update - 7/20/2008:

Nick responded with these comments:
Google does tend to favor technology we invent ourselves ...[snip]... OTOH, some of the systems we've built ourselves have been blockbuster hits that enable much of what you know as "Google" today.

In response to your comment about backward compatibility, Protocol Buffers are actually explicitly designed so that you can add fields and whatnot and still be able to read in records stored in the old format.

I have to admit, the "WOW" factor on some of Google's software has inspired competition and innovation. So, the next time I'm looking for a binary wire protocol, I'll take a harder look at Protocol Buffers.