Tuesday, November 8, 2011

No really, it's a revenue problem.

We have a spending problem in Washington, not a revenue problem. That is one of the most common refrains uttered by virtually every neoconservative over the past 30 years. Is it correct? Does our government irresponsibly squander ample revenue? Perhaps in some cases, but largely I don't think that explains why we consistently run a budget deficit or carry a massive Federal debt. One direct cause of both is simply that US citizens are paying less tax than ever. In this post, I will demonstrate two truths about taxes that are commonly misunderstood. I will go on to argue that federal taxes are important to society and that further decreasing tax revenue will bring consequences most people do not want.

  1. Across the board, everyone is paying less tax than at any other post-war time.
  2. Wealthy people especially are being taxed significantly less than at any other time.

To show that people are being taxed less per year, I used tax bracket data found at The Tax Foundation. With that data, I wrote a small Python script to calculate marginal tax in inflation-adjusted (Real) dollars. Then I compared the taxes owed by three different levels of taxpayer. The first was someone whose Adjusted Gross Income (AGI) is $49,445, the median individual income for 2010. The second income I chose was $250,000, because Obama's 2008 campaign used it as a definition of wealth. I also compared someone whose income is $1M, since someone making that much is part of the 1% protested against by Occupy Wall Street. For each income, I compared what the taxes owed would be or have been in 2011, 2001, 1991, 1981, 1971, 1961, and 1951. All comparisons were done with the individual's filing status as "Single".

By the definition of Adjusted Gross Income, this comparison covers only marginal tax after all deductions and exemptions have been applied.

As is obvious from these charts, across the board, marginal taxes owed have been dropping steadily for years. For the higher income levels, that drop started in earnest around the late 70s to early 80s.

Let's look at our current (2011) tax brackets. They are split into 4 columns.

  1. Married Filing Jointly
  2. Married Filing Separately
  3. Single (This is what I am using for comparison)
  4. Head of Household

These 4 columns have been around in their current form since 1971. Within each column are six brackets. Here's what the Single column brackets look like:

  • 10% > $0
  • 15% > $8,500
  • 25% > $34,500
  • 28% > $83,600
  • 33% > $174,400
  • 35% > $379,150

So your rate goes up for each dollar you make above a bracket threshold. Our current six-bracket system is progressive (in other words, not flat). However, in the past we were a lot more progressive. Here's what the Single column brackets looked like in 1969.

  • 14% > $0
  • 15% > $3,057
  • 16% > $6,114
  • 17% > $9,171
  • 19% > $12,228
  • 22% > $24,456
  • 25% > $36,683
  • 28% > $48,911
  • 32% > $61,139
  • 36% > $73,367
  • 39% > $85,594
  • 42% > $97,822
  • 45% > $110,050
  • 48% > $122,278
  • 50% > $134,505
  • 53% > $158,961
  • 55% > $195,644
  • 58% > $232,328
  • 60% > $269,011
  • 62% > $305,694
  • 64% > $366,833
  • 66% > $427,972
  • 68% > $489,111
  • 69% > $550,250
  • 70% > $611,389

Whoa, that's a lot of brackets.
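For the curious, here is a sketch of how the marginal math works, in the spirit of the small Python script mentioned earlier (this is not the actual script; the 2011 Single-filer thresholds used below are my reading of the IRS tables):

```python
# A sketch of a marginal tax calculator. Each (rate, floor) pair taxes only
# the slice of income between its floor and the next bracket's floor.
BRACKETS_2011_SINGLE = [
    (0.10, 0),
    (0.15, 8_500),
    (0.25, 34_500),
    (0.28, 83_600),
    (0.33, 174_400),
    (0.35, 379_150),
]

def tax_owed(agi, brackets=BRACKETS_2011_SINGLE):
    """Total tax owed on an adjusted gross income (AGI) under marginal brackets."""
    owed = 0.0
    for i, (rate, floor) in enumerate(brackets):
        if agi <= floor:
            break
        # The bracket's ceiling is the next bracket's floor (or the AGI itself
        # for the top bracket).
        ceiling = brackets[i + 1][1] if i + 1 < len(brackets) else agi
        owed += rate * (min(agi, ceiling) - floor)
    return owed

print(tax_owed(49_445))  # the 2010 median individual income; about $8,486
```

Each bracket only taxes the slice of income that falls inside it, which is why a top rate of 35% does not mean 35% of your whole income.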

In 2011, the highest marginal tax rate is 35%. That rate was set as part of the infamous Bush Tax Cuts, which are set to expire at the end of 2012. In 2013, the top rate will return to 39.6%. Still, nearly 40% is significantly lower than the 70% we were paying in 1969. The highest rate we've ever had was 94% in 1945. Of course, that was only on income greater than $2.4M (Real), which was $200K in 1945 nominal dollars. Since then, we have been moving steadily toward a flatter tax structure. This trend has accelerated in the past 30 years as conservative economics has become the dominant ideology.

Why are the neocons so in love with flat taxes? One answer is that they want a tax code everyone can understand. Another is that they want to justify a way for the wealthy to pay less tax. Flat taxes provide that justification under the guise of fairness. In fact, flat taxes are often called "fair taxes." Ostensibly, they are called that because the rate does not go up or down depending on your income level. Effectively, however, a flat tax raises taxes on the poor and lowers them on the rich, because any flat rate above the lowest progressive bracket raises taxes on people in that bracket. Flat taxes also have the undesired effect1 of significantly lowering government revenue, because under a flat tax wealthy people pay taxes on most of their income at a significantly lower rate. Herman Cain's 9-9-9 plan was laughably bad in this regard. The Washington Post presented a chart showing that 9-9-9 would raise taxes for most people and significantly lower them for the richest few. Part of Cain's plan involved a national sales tax, which would also raise the cost of living by 9%. In fact, the three 9s combined would raise the cost of living for the poorest Americans by 27%.

So what's the problem with lowering the tax rate for the wealthy? Another common conservative argument is that, paradoxically, lowering the tax rate for the wealthy raises tax revenue. The theory goes this way: instead of collecting a tax dollar from the rich, allow the rich to invest that dollar in the economy. That one-dollar investment could be turned into two dollars, and two dollars will produce more tax revenue than one. Thirty years of data suggest that this doesn't magically happen. This theory2 goes by a variety of names: it has been called supply-side or "trickle down" economics. Later, it was also called "Reaganomics."3 Currently, we are being warned not to tax the "Job Creators."

Allow me to get up on a soapbox. Given conversations I've had or overheard, my experience is that most people are clueless as to why paying less tax often hurts society in the long run. Our current policies have the effect of moving tax revenue toward zero while increasingly turning our society into a private-ownership society. This comes with a lot of unintended consequences.

For one, private, for-profit entities can only be enjoyed by people with money. So privatizing everything exacerbates inequality by stratifying access to resources and services.

Secondly, public money pays for common infrastructure (e.g., roads). Private money cannot and should not be responsible for infrastructure; there is just too much opportunity for special-interest abuse. Consider the Interstate Highway System. It was built between the 1940s and 1960s, and at the time it was the largest public works project in the history of mankind. Everyone benefits from it: because of it, goods can be distributed less expensively, so food and products are cheaper. A modest tax investment can create public infrastructure with lasting economic benefits for everyone.

Thirdly, taxing the middle and upper classes allows for redistribution of wealth. In other words, it can mitigate aristocracies. For example, taxes pay for Medicaid, which gives poor people access to healthcare. Without it, poor people could have health problems preventing them from participating in larger society, bettering themselves, and competing against wealthier people. Taxes can be used to fix those kinds of competitive imbalances.

My theory is that regular people want a balance in society. They want public works. They want lower-income people to have a chance at a better life. They just don't understand the government's role in providing these things. I recently overheard coworkers of mine lamenting the condition of our highways. Said one coworker, "With all these people out of work, couldn't they be employed to fix up the highways?" I casually interjected that infrastructure improvements are a big part of the American Jobs Act. Both of these colleagues call themselves conservatives. Both dislike Obama. Yet they unknowingly made a liberal argument. I suspect a lot of conservatives, that is, those whose minds haven't been warped by Fox News or Grover Norquist, would make similar arguments. So my theory is that average people know a balance should be struck between laissez-faire capitalism, central planning, and redistribution of wealth. But given the popular narrative that the Federal Government is at best inefficient and at worst incompetent, they are unsure what role it should play in realizing this balance.

Given our current budget deficit and Federal debt, now is not the time to delude ourselves into thinking we can get something for nothing. Furthermore, since most individuals are paying less tax than ever and since rich individuals are paying significantly less tax than ever, most middle to upper class people can afford to pay more taxes. That is, of course, if you believe that Medicare, Medicaid, VA services, Social Security, public works, and the military are worth paying for. I do.

1 Flat taxes can increase government revenue in cases where compliance is an issue (former Eastern Bloc countries are an example). They can also hold revenue steady depending on how income is distributed across a country; if everyone has a similar income, a flat tax may not decrease revenue compared with a progressive system.

2 Perhaps the genesis of this theory is the Laffer Curve. The original Laffer Curve was symmetrical; that is, its peak, or the highest tax rate before diminishing returns, was at 50%. However, many believe the curve is really asymmetrical, with a peak between 60% and 70%. That is a lot closer to what our top marginal rates were in the 1940s–1960s.

3 A lot of Reagan's actual tax policies (especially in the beginning) were progressive. For example, his administration raised the capital gains tax to 28%.

Tuesday, August 30, 2011

backups-mode for emacs

github: backups-mode for emacs

John Siracusa's excellent Ars Technica review of Mac OS X Lion includes a page on Apple's new document model API. The intent of this API is to effectively end the practice of manually saving a document. Instead, applications will auto-save your documents in the manner of Google Docs or iOS applications. There are many scenarios where this will be helpful. John describes a few.

  • The student who writes for an hour without saving and loses everything when the application crashes.
  • The businessman who accidentally saves over the "good" version of a document, then takes it upon himself to independently reinvent version control—poorly—by compulsively saving each new revision of every document under slightly different names.
  • The Mac power user who reflexively selects the "Don't Save" button for one document after another when quitting an application with many open windows, only to accidentally lose the one document that actually had important changes.
  • The father who swears he saved the important document, but can't, for the life of him, remember where it is or what he called it.

Apple will now enable the following experience as written by John.

  • The user does not have to remember to save documents. All work is automatically saved.
  • Closing a document or quitting an application does not require the user to make decisions about unsaved changes.
  • The user does not have to remember to save document changes before causing the document's file to be read by another application (e.g., attaching an open document with unsaved changes to an e-mail).
  • Quitting an application, logging out, or restarting the computer does not mean that all open documents and windows have to be manually re-opened next time.

I will add that file versioning is another aspect of Apple's API.

  • The user can explicitly choose to save a version of the document.
  • Old versions can be found and viewed.
  • Old versions can be reverted to. Reverting saves the current file as a new version, then replaces the current file with the chosen backup.

With backups-mode I've set out to approximate Apple's Document Model idiom in emacs. Here's how I accomplish it.

  • emacs already has an auto-save feature that is turned on by default. Therefore, if emacs crashes, a user can revert from this periodically saved file.
  • I've redefined kill-buffer and save-buffers-kill-emacs to automatically save any file-based buffer when closing a file or quitting emacs.
  • I've turned on emacs' version-control and am saving old versions of a file to a central location instead of said file's location.
  • The user can save a version of the file they are editing with the save-version command bound to \C-cv.
  • The user can list all versions with the list-backups command bound to \C-cb.
  • After listing old versions via list-backups, the user is taken to a special backups-mode buffer where they can:
    • View an old version in read-only mode
    • Diff two versions
    • Revert from an old version
backups-mode buffer

Installation, configuration, and usage documentation can be found on github.

Monday, August 29, 2011

Smelly code, smelly code. Why are they writing you?

There are certain times in your career as a developer when you come across code so horrible, it causes you to question your faith in humanity. Such was the case when my boss and I were given the MicroMain Web Request application to install on one of our servers. We couldn't get it to work even though we followed the instructions meticulously. We even spoke to one of their support people. Unfortunately, he was not one of their headset hotties. The error message we were getting wasn't too helpful, either:

Format of the initialization string does not conform to specification starting at index 0

This is a generic .net message. The issue was that we couldn't connect to the database. However, our SQL Server hadn't recorded a login attempt from the application. We thought it was a configuration issue. The web.config file was valid XML, and nothing looked terribly strange. Thankfully, they also provided us with the source code. Probably an accident. Nevertheless, we went to the function that was causing the error and were horrified by what we saw.

Public Shared Function GetConnectionString() As String
    'Dim sOleDb As String
    'Dim cn As OleDbConnection
    'Dim cmd As OleDbCommand
    'Dim dr As OleDbDataReader
    Dim returnConnString As String
    Dim sMethod, sColumns, sTableName As String
    sColumns = "DataLink, Platform, Login, Password, Server, Database"
    sTableName = "tblzApp"
    sMethod = ""
    If ConfigurationManager.AppSettings.Count > 0 Then
        sMethod = ConfigurationManager.AppSettings.GetKey(0)
    End If
    Try
        Select Case sMethod
            Case "ConnectionString"
                returnConnString = ConfigurationManager.AppSettings(sMethod).ToString
            Case Else
                Return sMethod
        End Select
        Return returnConnString
    Catch ex As Exception
        ex.Source &= "<br /> DAL.GetConnectionString"
        Throw ex
    End Try
End Function

Just for clarification, this is a 23-line function that should be a one-line function. Here is what the body should look like:

Return ConfigurationManager.ConnectionStrings("MicroMain").ConnectionString

The beauty of this one-line function is that if it fails, you know exactly why it failed. It fails fast. Conversely, their code fails slowly. In other words, it failed but didn't tell us why.

In addition to that, I happened upon at least 9 different code smells within this single function.

Code Smells within GetConnectionString

  1. getting a key/value item by position instead of name (appSettings is a key/value store)

    This is the equivalent of being a high school principal walking into a classroom to fetch Jonnie. Instead of walking in and asking for Jonnie, you grab the student closest to the door and whisper in his ear, "Are you Jonnie?" If he says "no," you walk out the door without completing your objective.

  2. using AppSettings for the connection string instead of ConnectionStrings (non-idiomatic)

    Since 2005, .net has a config section dedicated to connection strings.

  3. declaring unused variables (ie copy and paste programming)

  4. declaring unnecessary temporary variables (eg returnConnString)

  5. unnecessarily calling ToString

    Getting an AppSetting value already returns a string. Calling ToString simply gives you the opportunity to fail with the generic error message "Object reference not set to an instance of an object" instead of letting the caller handle the missing-value case.

  6. try/catch is totally unnecessary (since ToString should not be called)

    Furthermore, try/catch blocks should not be put into most methods. Unhandled exceptions should be allowed to pop up the stack to a global exception handler.

  7. mixing html into data access code

    If I have to explain why this is bad, you need to find another career.

  8. commented out code still hanging around (lazy coding or perhaps they don't use source control!!!)

  9. naming the connection string key "ConnectionString" instead of "MicroMainConnectionString"

    This assumes that the application is the only application deployed on the server.

The disaster continues

  • no global exception handler (ie yellow screen of death)
  • SQL Injection attacks

    Looking through other methods in their code, I found they were not using parameterized queries. Instead, they were building SQL by string concatenation, which leaves them wide open to SQL injection attacks.
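To make the smell concrete, here's a tiny illustration of the difference, sketched in Python with sqlite3 rather than their VB.NET (the table and payload are made up for the demo; in ADO.NET the fix would be SqlCommand parameters, but the principle is identical):

```python
import sqlite3

# A throwaway in-memory table standing in for the application's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

payload = "nobody' OR '1'='1"  # a classic injection input

# Smelly: building SQL by string concatenation. The payload rewrites the
# query, and the OR clause matches every row in the table.
unsafe_sql = "SELECT name FROM users WHERE name = '" + payload + "'"
unsafe_rows = conn.execute(unsafe_sql).fetchall()

# Correct: a parameterized query treats the payload as a plain value, so it
# matches nothing.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(unsafe_rows)  # [('alice',)] -- the injection leaked the whole table
print(safe_rows)    # []
```

With parameters, the query text and the user-supplied data never get mixed, so there is nothing for an attacker to rewrite.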

Makes you think...

Looking at their website and taking into account the fact that they sold their software to Newell-Rubbermaid (a Fortune 500 company), I get the impression that they have better salespeople than developers. Their code was head-scratchingly bad. It makes you wonder about the state of code in other off-the-shelf applications. Is it all this bad, or was this an anomaly?

As Jeff Atwood would say "Enterprisey to the bone".

Saturday, August 6, 2011

Alaska: The Last Frontier

Skip to the Photos

As I left work the Monday after returning from Alaska, I realized exactly who I resembled: Al Pacino at the end of Insomnia. Fitting, since that movie was set in Alaska. Alaska will do that to you, especially in the summer when it stays light out almost all day long.

That fact was the first thing that struck us on our trip. While we were in Fairbanks, we could look out our hotel room window at 12:30 AM and still see all the way across town. It wasn't until we drove farther south to Homer that we were awake while it was dark outside.

From Alaska

Most Alaskans are very similar to us "Americans," as they call those of us from the lower 48. However, there are some notable differences.

One difference is gun ownership. Most Alaskans own, carry, and use guns on a regular basis. This is a big difference from your average Northern Virginian, at least the ones I associate with. Heidi's (my wife's) grandfather has a huge gun collection. In fact, he keeps a vintage, working-condition 1848 Sharps rifle under the couch in his living room. That discovery seemed astonishing to me, both that he owns such an antique and that he keeps it under his couch. He also has a working 1898 Winchester shotgun in his kitchen, resting against the counter. Heidi's uncle has an AK-47 clone casually resting against the bookshelf in his living room. So it is a different culture from what we are used to here.

I don't know if most Alaskans also identify as "westerners," but our experience indicated that they share similar traits. One such trait was an assertiveness with strangers. Whereas eastern Americans are often considered closed off to interacting with strangers, Alaskans are not. We met a lot of people in public areas who simply started talking to us. To me, someone as reserved as any easterner, this took some getting used to. My wife loved it; she's more outgoing. Perhaps this phenomenon was simply a function of us spending most of our time in high-traffic tourist areas, the sort where locals are wont to give suggestions. Often, however, it wasn't. During our last night in Alaska, we were at a bar in Anchorage sitting at a long, high table when a guy and his girlfriend sat down next to us and started up a conversation. They weren't giving us suggestions or selling us anything. They simply wanted to talk. That type of experience seems a lot less likely to happen in Virginia.

From Alaska

Seeing wild animals native to Alaska was one of the best ongoing experiences we had. The interesting animals we saw included: moose, bald eagles, a sea otter, and orca whales (killer whales).

Moose are like the Alaskan deer. They are everywhere along the side of the road in Alaska.

We saw the orca whales the first night we stayed in Homer. We were on the "Spit," a three-mile narrow strip of land that juts out from the town. We had just exited the famous Salty Dawg bar when Heidi said, "let's walk down to the beach." So we walked down and stayed just long enough for her to put her feet in the water. We turned to walk back when she caught a glimpse of something briefly pop up out of the water about 30 feet from shore. I sort of saw it. She said, "I think those were orcas." I said, "What's an orca?" So we stood there for a second and, sure enough, the two whales popped back up briefly. Their dorsal fins and black coloring were unmistakable.

Our trip was split into two halves. The first half was spent with Heidi's family; the second was more of a personal vacation for us. Most of Heidi's family lives in Palmer, Alaska, about 45 minutes away from Anchorage. For you Palin fans or haters out there, Palmer is 5 miles away from Wasilla. We spent a couple of days there, then we drove to Fairbanks, about a six-hour drive, for Heidi's cousin's wedding. After leaving Fairbanks, we stayed with Heidi's aunt and uncle in their cabin between Delta Junction and Tok. After a couple of days with them, we returned to Palmer via Tok. This is the point in the trip when Heidi and I left her family and went off on our own. We traveled south onto the Kenai Peninsula after first staying one night at the Alyeska resort in Girdwood. Our journey south brought us to Homer, where we stayed for two days. After that, it was time to travel back to Anchorage and then back to Virginia.

Wednesday, July 6, 2011

Common Lisp lazy sequences

When we left things in Part 2, we needed a function that would return an infinite sequence of the same argument. In Clojure, we saw that function is used like so:

(repeat "+") ;; "+" is the argument to repeat infinitely

Common Lisp does not have infinite sequences built in, but they are easy to implement. The Common Lisp library clazy is one implementation of them. For the sake of demonstration, we will reinvent that implementation in a simpler fashion.

Lazy sequences

Without further ado, here is the code to implement a very rudimentary lazy sequence library:

(defpackage my-lazy
  (:use :cl)
  (:shadow cl:cdr cl:mapcar)
  (:export "CDR" "MAPCAR" "DELAY" "FORCE" "REPEAT"))

(in-package :my-lazy)

(defstruct thunk
  body)

(defun thunkp (arg)
  (eq (type-of arg) 'thunk))

(defmacro my-lazy:delay (expr)
  `(make-thunk :body (lambda () ,expr)))

(defun my-lazy:force (thunk)
  (if (thunkp thunk)
      (funcall (thunk-body thunk))
      thunk))

(defun my-lazy:repeat (arg)
  (cons arg
    (delay (repeat arg))))

(defun my-lazy:cdr (cons)
  "cdr for lists, force cdr for thunks"
  (force (cl:cdr cons)))

(defun my-lazy:mapcar (f list &rest more-lists)
  "Apply F to successive elements of LIST and MORE-LISTS.
Return a list of F's return values.
The lists can be lazy."
  (cons (apply f
               (car list)
               (cl:mapcar 'car more-lists))
        (when (and (cdr list) (every #'identity more-lists))
          (apply 'mapcar
                 f
                 (cdr list)
                 (cl:mapcar 'cdr more-lists)))))

The first thing you'll notice is that we create a lazy package called "my-lazy" that, when used, shadows some core functions (cdr and mapcar). This shadowing is necessary because we need these functions to operate the same way regardless of whether the sequence is a list or a lazy list.

Also, you'll notice that we create a structure called a "thunk" (eg defstruct thunk). Wikipedia defines a thunk as:

In computer science, a thunk (also suspension, suspended computation or delayed computation) is a parameterless closure created to prevent the evaluation of an expression until forced at a later time.

We define a thunk as a structure instead of simply a parameterless lambda so that our thunks have a type unique from any other parameterless function.

The two key items here are delay and force. Delay is a macro that creates a thunk from an expression. Force is a function that forces the evaluation of a thunk.

Cdr is redefined to "force" evaluation of the core cdr of a cons cell if the core cdr is a thunk. Otherwise, our cdr simply returns the core cdr of the cons cell.

Mapcar is also redefined, but only to use our version of cdr. Beyond that, it does the same thing as the core mapcar.

Now let's check our example. First in Clojure:

user> (map (partial str "price & tip: ") [5000 100 50] (repeat "+") [2000 40 10])
("price & tip: 5000+2000" "price & tip: 100+40" "price & tip: 50+10")

Next using our new package in Common Lisp:

CL-USER> (in-package :my-lazy)

MY-LAZY> (mapcar (partial 'str "price & tip: ") 
          '(5000 100 50) (repeat "+") '(2000 40 10))
("price & tip: 5000+2000" "price & tip: 100+40" "price & tip: 50+10")

Huzzah! We have successfully replicated standard Clojure functionality in Common Lisp.


In Part 1 we looked at modifying the Common Lisp syntax using reader macros. In Part 2 we looked at simplifying function calls using currying techniques. Here, in Part 3, we implemented infinite sequences, which eliminated a (sometimes) hard-to-find bug.

The great thing about Common Lisp is that it is extremely easy to implement all of these concepts. In most other languages, implementing them would almost certainly be either more difficult or impossible. The only drawback is that often there is too much freedom. The Common Lisp ethos is such that developers are encouraged to implement these concepts on their own. The result is that, unlike in the Clojure world, often there is no canonical answer to these problems (such as lazy sequences). As a fan of Common Lisp, I'd like to see it gain more traction. Perhaps that will take one killer application (like Rails for the Ruby language). For that to happen, it will probably also need more coherence across the community. Regardless, because of its flexibility, it is still, in my opinion, the ultimate hacker language.

Common Lisp currying

In Part 1 we looked at adding to the Common Lisp syntax using a reader macro. Now we are going to use a regular function to implement a technique called "currying." We do this to abstract away a function call with a common argument. So instead of calling the same function with the same argument over and over again in your code, you can curry that function and argument into a symbol that can be called on its own.


Wikipedia defines currying as:

In mathematics and computer science, currying is the technique of transforming a function that takes multiple arguments (or an n-tuple of arguments) in such a way that it can be called as a chain of functions each with a single argument (partial application).

In Part 1, we needed to curry the function "concatenate" with the argument "'string". There are a number of ways to do this. We are going to build a currying function called partial that approximates the Clojure function of the same name.


The example which uses the partial function in Clojure comes from here. Below is the same example using the #() reader macro from Part 1 along with a slightly more readable and compact version using partial.

user> (map #(apply str "price & tip: " %&) 
              [5000 100 50] (repeat "+") [2000 40 10])
("price & tip: 5000+2000" "price & tip: 100+40" "price & tip: 50+10")

user> (map (partial str "price & tip: ") 
              [5000 100 50] (repeat "+") [2000 40 10])
("price & tip: 5000+2000" "price & tip: 100+40" "price & tip: 50+10")

Notice how the use of partial gets us out of the prosaic work of using a reader macro and "applying" a list of arguments to the magic "%&" argument. The "%&" list argument is implied when we use partial.

Partial turns out to be another trivially easy function to write in Common Lisp.

(defun partial (f &rest args)
  "currying function"
  (lambda (&rest more-args)
    (apply f (append args more-args))))

Essentially, we are merging together the arguments we know with the arguments we will be passing into the curried function. Let's demonstrate it in action.

CL-USER> (mapcar [apply 'concatenate 'string "price & tip: " %&] 
          '("5000" "100" "50") (loop repeat 3 collect "+") '("2000" "40" "10"))
("price & tip: 5000+2000" "price & tip: 100+40" "price & tip: 50+10")

CL-USER> (mapcar (partial 'concatenate 'string "price & tip: ") 
          '("5000" "100" "50") (loop repeat 3 collect "+") '("2000" "40" "10"))
("price & tip: 5000+2000" "price & tip: 100+40" "price & tip: 50+10")

CL-USER> (setf (symbol-function 'concatenate-string) 
          (partial 'concatenate 'string))

CL-USER> (mapcar (partial 'concatenate-string "price & tip: ") 
          '("5000" "100" "50") (loop repeat 3 collect "+") '("2000" "40" "10"))
("price & tip: 5000+2000" "price & tip: 100+40" "price & tip: 50+10")

Now we are getting somewhere. Using partial we've been able to turn the symbol 'concatenate-string into almost the same thing as the Clojure function "str".

String methods

The more astute readers may notice the Common Lisp examples still contain a little extra cruft not found in the Clojure examples.

The two pieces of cruft I am referring to are:

  • The Common Lisp version contains double quotes around the price literals, starting them out as strings. The Clojure version simply uses numbers, which get converted to strings within the "str" function. We'll adapt the Common Lisp version to do the same.
  • The more dastardly of the crufty items is the lack of Clojure's "repeat" function. Clojure's "repeat" simply repeats its argument infinitely as a sequence of items. This works in our example because map stops when one of the finite list args runs out of items. Our Common Lisp version relies on the "loop" macro instead. In using "loop", we must specify an iteration count that exactly matches the length of the longest of the other list args. Failure to do so will cause our code to short-circuit. This can be the source of maintenance bugs in production code.

Let's tackle the first piece of cruft. Here's the Common Lisp code:

(defmethod to-string (arg) (string arg))
(defmethod to-string ((arg integer)) (write-to-string arg))

(defun str (&rest args)
  (apply 'concatenate-string  (mapcar #'to-string args)))

The first thing to notice is the use of "defmethod" instead of "defun". Defmethod is a way to create functions that are dispatched differently at runtime based on argument type. So if we pass an integer argument to "to-string", the result will be:

(write-to-string arg)

Whereas, if we pass an argument of any other type to "to-string" the result will be:

(string arg)

In other words, the latter "result" function is the default, whereas the former is a specific type implementation. This gives us great flexibility to extend "to-string" in the future without redefining any current methods.

The "str" function behaves just like the Clojure equivalent. It takes in a list of arguments and turns them all into strings before sending them along to our "concatenate-string" function.

Approximating the Clojure "repeat" function is a little trickier. It requires us to implement infinite lists. Part 3 will demonstrate how to do that.

Common Lisp reader macros

Lisp has been called the programmer's language by many people. Usually, it is called that because Lisp, unlike more conventional languages, gives the programmer the means to reshape the language into something else entirely. I am going to look at examples of code written in other Lisps and, using Common Lisp's macro and function facilities, write the equivalent in Common Lisp.

Reader Macros

Part 1 will demonstrate the use of reader macros. Reader macros allow you to literally change the Common Lisp syntax. We are going to add support for brackets as anonymous functions. This will approximate the syntax Paul Graham has created in Arc Lisp. It will also approximate the #() reader macro built into Clojure.

Quick aside regarding the Common Lisp template syntax

I'm going to be using the Common Lisp template syntax for the macros demonstrated below. I'm going to assume you understand it. If not, this is a good beginner tutorial on Common Lisp macros and the template syntax.

Arc Lisp

To explain brackets as anonymous functions, here are some usage examples in Arc Lisp:

arc> ([* _ 10] 3)
30

arc> (map [* _ 3] '(1 2 3 4))
(3 6 9 12)

Now for comparison, here are the same examples expanded into long form:

arc> ((fn (_) (* _ 10)) 3)
30

arc> (map (fn (_) (* _ 3)) '(1 2 3 4))
(3 6 9 12)

As you can see, the short form reader macro is (in my opinion) easier to both read and type. It saves at least 4 parens each use. So it is a valuable abstraction.

Common Lisp does not have this macro built in, so let's create it. First off, this blog post is an excellent beginner tutorial on Common Lisp reader macros. After reading it, it became clear to me that implementing Arc-style bracketed anonymous functions would be trivial.

(set-macro-character #\[
  (lambda (stream char)
    (let ((sexp (read-delimited-list #\] stream t)))
      `(lambda (_) (,@sexp)))))

(set-macro-character #\]
  (get-macro-character #\)))

Ok, so the set-macro-character function takes two arguments. The first argument is the character to watch for. The second argument is the function to execute when the reader encounters that character.

The first use of set-macro-character says, "When an open bracket character is read, read the following stream of characters into an s-expression, then return an anonymous function with that s-expression as the body of the function." The function read-delimited-list reads a stream of characters into a list (delimited by whitespace by default) until it reaches the character given as its first argument, in our case the close bracket.

The second set-macro-character says, "Treat close bracket characters as if they were closing parens".

Let's try it out.

CL-USER> ([* _ 10] 3)
30

CL-USER> (mapcar [* _ 3] '(1 2 3 4))
(3 6 9 12)

Yay! It works. Let's not stop there.

emacs support

If you are like me and use emacs with paredit to do your Lisp hacking, you are going to want to have emacs treat brackets the same way it treats parens. Here's what you need to add to your .emacs:

(modify-syntax-entry ?[ "(]" lisp-mode-syntax-table)
(modify-syntax-entry ?] ")[" lisp-mode-syntax-table)


The problem with the reader macro we've just implemented is that it only expands to a one-argument function. Not very versatile.

The Clojure anonymous function reader macro, #(), can take one argument, multiple numbered arguments, or a "rest" list argument. Here's what they look like in action:

user> (#(* % 10) 3)
30

user> (map #(* % 3) [1 2 3 4])
(3 6 9 12)

user> (map #(* %1 %2) [1 2 3 4] [5 6 7 8])
(5 12 21 32)

user> (map #(apply str %&) ["hello, " "clojure, "] ["world" "rocks"])
("hello, world" "clojure, rocks")

Let's implement all 3 of these in Common Lisp.

;; clojure idiom
(require 'cl-ppcre)

(defun numbered-arg-as-string (arg)
  (cl-ppcre:scan-to-strings "^%\\d+$" (string arg)))

(defun single-arg-as-string (arg)
  (let ((sarg (string arg)))
    (when (string-equal "%" sarg)
      sarg)))

(defun arc-arg-as-string (arg)
  (let ((sarg (string arg)))
    (when (string-equal "_" sarg)
      sarg)))

(defun rest-arg-as-string (arg)
  (let ((sarg (string arg)))
    (when (string-equal "%&" sarg)
      sarg)))

(defun flatten (l)
  "flattens a list"
  (cond ((null l) l)
        ((atom l) (list l))
        (t (append (flatten (car l))
                   (flatten (cdr l))))))

(defun make-arg-list (predicate delimited-list)
  (labels ((string-list (delimited-list)
             (mapcar (lambda (x)
                       (cond ((symbolp x) (funcall predicate x))
                             ((listp x) (string-list x))))
                     delimited-list)))
    (remove-duplicates
     (mapcar #'intern
             (sort (flatten (string-list delimited-list))
                   #'string-lessp))))) ;; BUG: if more than 9 numbered arguments are used

;; first check for numbered args,
;; then for a single % arg,
;; finally default to a single _ arg
;; swallow the rest args to get around style warnings
(set-macro-character #\[
  (lambda (stream char)
    (let* ((sexp (read-delimited-list #\] stream t))
           (args (make-arg-list #'numbered-arg-as-string sexp))
           (rest-args (make-arg-list #'rest-arg-as-string sexp))
           (rest-arg (or (car rest-args) (gensym))))
      (unless args
        (setf args (make-arg-list #'single-arg-as-string sexp)))
      (unless args
        (setf args (make-arg-list #'arc-arg-as-string sexp))) ;; arc idiom (_)
      `(lambda (,@args &rest ,rest-arg) (identity ,rest-arg) (,@sexp)))))

(set-macro-character #\]
  (get-macro-character #\)))

So this is a lot more code than the Arc-style implementation, but really it is the same thing. The only difference is that we need to look inside the s-expression "sexp" and find all the possible arguments. That is what make-arg-list does. A predicate is passed to make-arg-list to check for the different types of args available (e.g. % or %1, %2, ... or _). make-arg-list then returns only the distinct arguments found, sorted in ascending order. This list becomes ,@args in our lambda expression. Also, if the argument %& is found in the s-expression, it becomes the &rest arg in our lambda expression.
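To make that concrete, here is roughly what make-arg-list extracts from a body that uses numbered args (note how the duplicate %1 is collapsed):

```lisp
;; Illustrative call: walk the body, keep symbols matching ^%\d+$,
;; sort them, and drop duplicates.
CL-USER> (make-arg-list #'numbered-arg-as-string '(* %1 (+ %2 %1)))
(%1 %2)
```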

A couple of interesting tidbits. If %& is not used in the s-expression (our code body, that is), but more arguments are passed to our resulting lambda than we are accounting for, we swallow them in a &rest argument whose name is a (gensym) symbol. In other words, we don't care about them at all, but we still want our code to run without error. If we do that, some Common Lisps (e.g. SBCL) will warn about declaring an argument without using it. We get around that warning by calling the identity function on the anonymous &rest argument before executing our code body. Yes, this wastes a CPU cycle, but who cares? For our purposes, seeing that warning message every time would be way more annoying.
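Putting it together, [* % 10] now reads as something like the following (the actual &rest name is a fresh gensym, so it will differ on every expansion; #:g42 is just a stand-in):

```lisp
;; Approximate expansion of [* % 10]:
(lambda (% &rest #:g42)
  (identity #:g42)   ; touch the rest arg to silence unused-variable warnings
  (* % 10))
```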

The same examples in Common Lisp now look like:

CL-USER> ([* % 10] 3)
30

CL-USER> (mapcar [* % 3] '(1 2 3 4))
(3 6 9 12)

CL-USER> (mapcar [* %1 %2] '(1 2 3 4) '(5 6 7 8))
(5 12 21 32)

CL-USER> (mapcar [apply 'concatenate 'string %&] 
          '("hello, " "common-lisp, ") '("world" "rocks"))
("hello, world" "common-lisp, rocks")

All of these examples were looking great until we got to the last one. Why do we have to remember to type the symbol 'string before the rest variable? This is precisely the kind of esoterica we should be striving to abstract away. Abstracting that away is what we will be doing in Part 2.

Friday, June 3, 2011

Hacker News in the terminal - written in Common Lisp

hackernews - git

Are you sick of those pesky kids and their newfangled "web browsers"? Well, rejoice. Now you can browse your favorite site (Hacker News, of course) without ever leaving the friendly confines of your terminal session.

Watch a short video of hackernews in action.

Wednesday, April 6, 2011

The difference between Facebook and Twitter

I feel like posting on Facebook makes you question the amount you are willing to share with the world. Posting on Twitter stokes your need for affirmation.

Wednesday, February 9, 2011

Why ASP.Net?

At work, I've been an ASP.Net developer for several years now. It's an easy platform to get started with. It seems its original purpose was to approximate desktop applications. To the degree it does that, it has succeeded. I'm going to argue in this post that approximating desktop applications obscures the true nature of the web and in the long run makes web programming more difficult.

In addition to ASP.Net, I also have experience in non-.Net web environments. Before my .Net days, I did some classic ASP. I've also created a Ruby on Rails site for my wife's father (Hiller Enterprises). So I get that the underlying model of the web is very different from the postback model of ASP.Net.

(note: this post only concerns traditional ASP.Net and not ASP.Net MVC)

This underlying model of the web is to take a request from a browser and return a response. Typically, this response will be HTML. The browser will then interpret and display the HTML in a meaningful way.

The request sent by the browser will be either a GET or a POST. By convention, a GET is something you do when you want to "pull" data from a site. By virtue of the fact that you are reading this blog post, your browser did a GET to view it. A POST is what I did when I published this article to the web. I POSTed the text you are reading to blogger.com.

When you do a POST on the web, you are clicking a submit button nested within an HTML form tag. That is how a POST is triggered. That is the only way a POST is triggered. (Well, except via Javascript. Using Javascript you can set the "click" event on an element (e.g. an anchor tag) to trigger a submit. That is how ASP.Net Linkbuttons work.)

A typical form with submit button

OK, so now we know how to trigger a POST, but where does the form's data go? It goes to the URL specified in the form's "action" attribute. The web server behind that URL then runs code to perform the action the user intended using the form's data (e.g. publishes a blog post).

Often in a PHP, Ruby, Python, Perl, etc environment you will see multiple forms on a page. This is because pages can get complex enough for the user to do multiple things on a single page. Cool! So you can Login or Logout or Add to Cart or Change your Facebook status all from the same page.

So what is different about ASP.Net? Well, ASP.Net uses what is called a server-side postback architecture. ASP.Net pages are restricted to just one form element. The action for that form always points back to the URL corresponding to the page it is on. In other words, clicking a submit button on a page POSTs the page back to itself.

So given a page with multiple buttons, how can we tell which button was clicked? That is done via a hidden input element on the page with the ID of __EVENTTARGET. It is set via Javascript right before the submit occurs. When the POST request reaches the web server, the ASP.Net framework fires the event corresponding to the value of the __EVENTTARGET field. It is up to the programmer's code to handle the event that was fired.

An ASP.Net form

So given these complications, what is the advantage of ASP.Net? Why would anyone use it?

I've distilled that answer down to one word.

ViewState.

Yep, that's it. ViewState is the only advantage. Everything else can be done just as (or more) easily in traditional web programming.

For those who don't know, ViewState is a magic hidden variable put into every ASP.Net page. It contains a Base64-encoded string of key/value pairs, one for each HTML element found on the page. As you might imagine, it can grow very large.

A typical block of viewstate

Its purpose is to maintain page state throughout postbacks. So if you can imagine a desktop application, the data in the fields is always preserved after any button is clicked. Viewstate is the mechanism to emulate that in the ASP.Net world.

So the programmer can set the fields once when the page is first requested (on the GET) and ASP.Net will continue to re-populate them every time a user clicks a button on the page (which causes a POST). This is fairly convenient especially for the novice programmer. You don't have to re-pull the data from the database every time the page is posted back.

In my opinion, this convenience breaks down upon closer inspection. If Viewstate didn't exist, the web programmer would simply need to explicitly redirect back to the same page after a submit. In general, this might not be a bad idea since it re-requests the page as a GET instead of a POST. This allows the user to refresh the page without getting the dreaded POST warning popup.

The dreaded POST refresh warning

I've also made a claim that everything else can be done just as or more easily in traditional web programming. What exactly do I mean by that? What constitutes everything else? The list I'm thinking of is Javascript and AJAX, Restful URL representation, HTML validation, and CSS.

Javascript is famously difficult to code in ASP.Net applications. The primary reason is that the server-side IDs for ASP.Net presentation controls (e.g. Textboxes, Dropdowns, etc.) change once those controls are rendered as HTML. In the ASP.Net form example above, "txtName" is the ID the programmer gave to the Textbox; "ctl00_MainContentPlaceHolder_txtName" is the ID rendered in the browser. In other words, "ctl00_MainContentPlaceHolder_txtName" is the ClientID. Since there is only one form on the page, this "namespacing" of IDs is necessary to keep them unique.

This limitation can be worked around. However, it often restricts you from keeping all of your JS code together in one place on the page. Thus, debugging JS is monstrously difficult in ASP.Net.

Likewise, CSS is difficult for the very same reason. It often causes you to create more markup than you actually need. For example, if you want to style something by ID in CSS (using the # prefix), this element cannot be a "server" control since its ID must not change. If you also need to manipulate it on the server, you'll have to create a wrapper server control around said HTML control. You now have markup that only exists to get around framework limitations.

ASP.Net applications are not easily made Restful. In fact, it is usually easy to spot an ASP.Net application since the "page" part of the URL ends in .aspx. A typical Restful URL might look like (an illustrative example):

http://example.com/users/42

In ASP.Net it would look something like:

http://example.com/UserDetails.aspx?userId=42
Another near impossibility is HTML validation. That is because if you use Microsoft's out-of-the-box controls such as Textbox, Linkbutton, Hyperlink, etc., you have no control over the resulting HTML sent as the response to the browser. So if you want your HTML to be valid, you have two choices:

1) Only use native HTML controls instead of Microsoft's ASP.Net controls
2) Create your own server controls based on Microsoft's controls

Either option is viable but requires a substantial time investment to perfect.

So given what a horrible piece-of-crap ASP.Net is, is there ever a good reason to use it? In a word, CRUD applications. Essentially, ASP.Net is the web equivalent of the Microsoft Access database. If you need to quickly create a big form of data with a couple of buttons on it for users to save records, ASP.Net is better than the other frameworks (Maybe). If you are developing a site for the public internet, do not use ASP.Net. Virtually every other option is both easier and better.