Paul M. Jones

Don't listen to the crowd; they say "jump."

Different Definitions of Quality

Recently, I was pondering why it is that programmers and employers have different attitudes toward the quality of the projects they collaborate on. I formulated it like this:

  • The people who do the work are usually the ones who care more about quality. Why?

    • They have a reputation to maintain. Low quality for their kind of work is bad for reputation among their peers. Note that their peers are not necessarily their employers.

    • They understand they may be working on the same project later; higher quality means easier work later, although at the expense of (harder? more?) work now.

  • The people who are paying for the work care much less about quality. Why?

    • The reputation of the payer is not dependent on how the work is done, only that the work is done, and in a way that can be presented to customers. Note that the customers are mostly not the programmer’s peers.

    • They have a desire to pay as little as possible in return for as much as possible. “Quality” generally is more costly (both in time and in finances) in earlier stages, when resources are generally lowest or least certain to be forthcoming.

    • As a corollary, since the people paying for the work are not doing the work, it is easier to dismiss concerns about “quality”. Resources conserved earlier (both in time and money) mean greater resources available later.

Dismissing quality concerns early may cause breakage and stoppages when the product is more visible or closer to deadline, leading to greater stress and strain to get work done under increasing public scrutiny. The programmer blames the lack of quality for the troubles, and the employer laments the programmer’s inability to work as quickly as he did earlier in the project.

Two Different Definitions

While the above analysis may be true, I realized later that I was approaching the problem from the wrong angle. It's not that one cares more about quality than the other; instead, they have two different definitions of project quality.

  • The programmer’s “quality” relates to what he sees and works with regularly and is responsible for over time (the code itself).

  • The payer’s “quality” relates to what he and the customers see and work with regularly and are responsible for over time (what is produced by running the code; i.e., the product, not the program).

That's the source of the disconnect. When approached in this way, "quality" as judged in one view is now obviously not the same thing as when judged in the other view; code quality and product quality are distinct from each other (although still related).

One interesting point is that the developer has some idea about the product quality (he has to use the product in some fashion while building it), but the manager/employer/payer has almost no idea about the code quality (they are probably not writing any code).

The solution to the disconnect in software development may be to involve someone who understands both sets of concerns, and who has the authority to push back against both sides as needed. Then the business as a whole can address the concerns of both sets of people.


Epilogue:

1. Thanks to Brandon Savage for reading and commenting on an earlier version of this article.

2. Incidentally, I think the "quality" definition disconnect also applies to various non-software crafts and trades. You hear about carpenters, plumbers, painters, etc. complaining that they get undercut on prices by low-cost labor who don’t deliver the same level of quality. And yet the customers who choose the lower-cost option are satisfied with the quality level, given their resource constraints. The developer-craftsman laments the low quality of the work, but the payer-customer just wants something fixed quickly at a low cost.


Markets Make Us More Rational

People, including economists, are imperfect decision makers because of their mental limitations. But this fact does not mean that markets fail. Indeed, markets do far more than induce improved allocation of resources, given wants and resources. Markets induce market participants to be more rational than they otherwise would be because they must pay a price for being irrational. Thus, markets allow--no, require--economists to assume that people are more rational than they are likely to be found to be in laboratory settings, absent meaningful information and incentives and absent market pressures.

via The Volokh Conspiracy » How Markets Make Us More Rational.


Grocery School

Suppose that we were supplied with groceries in the same way that we are supplied with K-12 education.

Residents of each county would pay taxes on their properties.  A huge chunk of these tax receipts would then be spent by government officials on building and operating supermarkets.  County residents, depending upon their specific residential addresses, would be assigned to a particular supermarket.  Each family could then get its weekly allotment of groceries for “free.”  (Department of Supermarket officials would no doubt be charged with the responsibility for determining the amounts and kinds of groceries that families of different types and sizes are entitled to receive.)

Except in rare circumstances, no family would be allowed to patronize a “public” supermarket outside of its district.

...

Does anyone believe that such a system for supplying groceries would work well, or even one-tenth as well as the current private, competitive system that we currently rely upon for supplying grocery-retailing services?

via Grocery School.



Bad Man Down

Bin Laden’s death is not, as Peter Beinart suggests in the Daily Beast, the end of the war on terror.  Unfortunately, a shadowy underworld of “Islamic” terror groups continues to pose an unprecedented threat around the world.  Unlike anarchist and communist terror groups in the past, they can kill hundreds and even thousands of people at a time, and they have the ability to disrupt commerce and the free flow of people around the world.  The threat that these groups could acquire chemical, biological or nuclear weapons of mass destruction is still very much alive; we live in an era in which non-state actors can wield levels of violence on a scale once restricted to states.

This underground, with links to organized crime, is opportunistic and evolving.  New leaders will emerge, new tactics will develop, and new attacks will come.  This remains a strategic threat, and whether we admit it or not, the state of war continues. We are winning that war by degrading the capacity and depressing the elan of these groups.  They are losing their popular support in most places; a decade of growing international cooperation has made the world’s counter terror measures significantly more effective.

So to amend Beinart, we are winning this war, but it isn’t over yet.

via Bad Man Down | Via Meadia.


Al-Qaida head bin Laden dead

Osama bin Laden, the mastermind behind the Sept. 11 attacks against the United States, is dead, and the U.S. is in possession of his body, a person familiar with the situation said late Sunday. President Barack Obama was expected to address the nation on the developments Sunday night. Two senior counterterrorism officials confirmed that bin Laden was killed in Pakistan last week. One said bin Laden was killed in a ground operation, not by a Predator drone. Both said the operation was based on U.S. intelligence, and both said the U.S. is in possession of bin Laden’s body.

via Sources: Al-Qaida head bin Laden dead - Yahoo! News.


The Economics of Death Star Planet Destruction

For the Empire to actually exist as an institution, it needs to have the mechanisms in place to exist – namely, donks like Queen Amidala and Senator Jar Jar Binks who basically just sit around and handle boring government work. And you also need people everywhere. Like, if the Emperor controls everything, he needs to make sure every Speeder Registry office in every settlement on Tatooine has somebody working the counter except during major Imperial holidays. And he needs to pay them something (they can’t all just be clone slaves – that’s clearly not how the Empire works). If you don’t pay your people, they tend to first, be lazy, second, take bribes and be likely to betray you, and third, leave their posts or actively conspire against you.

To maintain order, the Emperor would generally need a MASSIVE, MASSIVE bureaucracy. The Old Republic built up a serviceable one over thousands of years, but that took a lot of time, money and effort, and in the end it was bloated, ineffective, and ultimately subverted against the Old Republic.

The more you spend on bureaucracy, the less control you have directly over your Empire. The less you spend on bureaucracy, the more you have to tighten your grip, and the more star systems slip through your fingers.

So, the Emperor and Tarkin focus on making one really huge, high-impact investment: The Death Star.

via Think Tank: The Economics of Death Star Planet Destruction » Print | Overthinking It.


Victims of Communism Day

May Day began as a holiday for socialists and labor union activists, not just communists. But over time, the date was taken over by the Soviet Union and other communist regimes and used as a propaganda tool to prop up their regimes. I suggest that we instead use it as a day to commemorate those regimes’ millions of victims. The authoritative Black Book of Communism estimates the total at 80 to 100 million dead, greater than that caused by all other twentieth century tyrannies combined. We appropriately have a Holocaust Memorial Day. It is equally appropriate to commemorate the victims of the twentieth century’s other great totalitarian tyranny. And May Day is the most fitting day to do so. I suggest that May Day be turned into Victims of Communism Day....

The main alternative to May 1 is November 7, the anniversary of the communist coup in Russia. However, choosing that date might be interpreted as focusing exclusively on the Soviet Union, while ignoring the equally horrendous communist mass murders in China, Cambodia, and elsewhere. So May 1 is the best choice.

via The Volokh Conspiracy » Victims of Communism Day.


Best Creamed Spinach Ever

By popular demand, here is the recipe.

Ingredients:

  • 2 packets of hollandaise sauce mix (dry). Typically you will also need 2 cups (one pint) of whole milk*, and 1/2 cup (a whole stick) of butter to prepare it.

  • 1 cup shredded parmesan or parmigiano reggiano

  • 2 10oz packs of frozen chopped spinach, thawed and pressed as dry as you can (use paper towels, or squeeze through a colander).

Preparation:

  1. Prepare the hollandaise sauce mix per its instructions.

  2. When the sauce is ready, remove from heat; add the shredded parmesan and mix until combined and melted.

  3. Add the chopped spinach, mix until combined, and return to low heat just until it is warmed through; overcooking will ruin the consistency.

Makes about 3 cups, enough for 12 quarter-cup servings (or 12 two-ounce servings).

Excellent as a side dish for a good steak.


* Whole milk, people, whole milk. None of this 2% or lowfat junk. It's decadent; deal with it.


UPDATE (2014-02-19): Use 3 packs of spinach if you want it a little less creamy. Lately I've preferred it that way.


Estimation Methodology: 2 Workers, 1 Day Per Controller Method

(This is an older draft I’ve had around for more than a year; rather than let it sit around while I ponder how to improve and expand it, I’m publishing it now so it can be useful in the meantime.)

Prerequisites

These are my prerequisites for building a reasonable estimate for client work. Note that this is primarily for team development of client work, not single-developer work (whether for clients or otherwise), but it may be applicable in those situations as well.

  1. Business and functional requirements have been discussed back-and-forth until both the client and the work team are roughly satisfied they understand the system requirements as a whole and how the system helps the business. This is not a waterfall condition, where one needs to know absolutely everything in advance; it is more like a meeting of the minds, to make sure both the client and the work team believe they understand the business needs, have a sense of how well the parties can work together, and share a moderately-detailed idea of how the resulting system will fulfill those needs.

  2. For each sub-portion of the project to be estimated, the design team has put together a reasonable facsimile of the site as bare-bones wireframes. It is much better if the entire project is wireframed first, even though we know the requirements will change. As Eisenhower said, “Plans are nothing, but planning is everything.” This will give the client an idea of the scope of what he is asking for, and make it more concrete in everyone’s mind what the end-goal will look like.

    In addition, the wireframes help to ascertain that both the client and the work team have an agreed-upon picture of what the final product ought to look like; without this, it’s too easy for the client and the work team to think they understand each other when in fact they do not. Seeing the wireframes and how they fit together will illuminate inconsistencies, imperfections, and misunderstandings; these can be resolved well before development begins.

  3. Schemas (whether SQL or NoSQL) for the data models, as derived from the requirements and wireframes, are complete to a first approximation.

As you can see, I do not do estimates based on general narrative descriptions. I do them based on the wireframed feature sets. No wireframes, no estimations. This generally means the “discovery” period needs to be billed separately from the “development” period.

Two Workers, One Day Per Controller Method

Once the prerequisites are in place, I can build an estimate. For each page of the wireframes, or each substantial portion thereof, I estimate an average of one work day for two workers to complete. (Some pages will take less than a day, but some will take more.) Phrased another way, I usually estimate a project to take two workers one day for each page-controller method.

I arrived at this rule-of-thumb by looking at several past projects and dividing the actual calendar time by the number of controller methods in each project. It has proved a reliable guide ever since I started using it. One nice thing is that it takes into account all the real things that happen to cause delays; the natural optimism of developers and their tendency to ignore the possibility of unexpected negative events is thereby removed.
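
For concreteness, here is a minimal sketch of that arithmetic as a PHP helper; the function name and array keys are my own illustration for this post, not part of any framework:

    <?php
    // One calendar day per page-controller method, worked by a pair
    // of workers, so the cost in worker-days is double the calendar days.
    function estimateByControllerMethods(int $methodCount): array
    {
        $calendarDays = $methodCount;       // one day per method
        $workerDays   = $calendarDays * 2;  // two workers per day
        return [
            'calendar_days' => $calendarDays,
            'worker_days'   => $workerDays,
        ];
    }

    // Example: a component with five controller methods.
    print_r(estimateByControllerMethods(5)); // 5 calendar days, 10 worker-days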

To illustrate this, let’s say one component of the project ends up needing to browse, read, edit, add, and delete (BREAD) elements of a particular data model. Those would normally translate into five separate pages, or five separate methods in a page controller. (Even if they are AJAX-enabled portions of the same page, they would still very likely be separate methods in a page controller.) My default position is that it will take two workers five days to complete that component for a client.
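
As a sketch of what counts as a “controller method” for this purpose, a bare page controller for such a component might look like the following; the class and method names are hypothetical, for illustration only:

    <?php
    // Hypothetical page controller for a BREAD component. Each action
    // method counts as one "controller method" in the estimate: five
    // methods, hence five worker-pair days by the rule of thumb.
    class WidgetPageController
    {
        public function browse()    { /* list the widgets */ }
        public function read($id)   { /* show one widget */ }
        public function edit($id)   { /* modify an existing widget */ }
        public function add()       { /* create a new widget */ }
        public function delete($id) { /* remove a widget */ }
    }

The estimate is per method, not per line of code; each method drags along its own views, model interactions, and (as described below) communication overhead.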

Two Workers?

The pair of workers is a mixed set composed of a primary PHP developer and a secondary or support worker. The mix might change on a daily basis, and is certain to change at different stages of long projects. The mix could be two PHP developers; or a PHP developer and a UI/UX developer; or a PHP developer and a system architect; or a PHP developer and a DBA.

As such, note that a worker-pair is not the same thing as Agile pair programming. The worker-pair terminology is for determining the cost of development; the per-day terminology is for determining the calendar schedule for development.

I think the idea of the worker-pair accurately reflects the day-to-day reality of team development. Graphic design can proceed concurrently; as such, it may not affect the calendar schedule, but it certainly can affect the budget.

One Day Per Controller Method?

A whole day per controller method? Even though that includes all the pieces needed for the method, such as the views, model methods, and other support methods, it sounds unrealistic. Any developer worth his salt can knock out an entire BREAD controller in half a day easy, right?

Perhaps, if he’s working on his own project to fulfill his own needs. But when he’s working on a project for someone else, and has to coordinate his activities with a team, production velocity decreases predictably. This is because of the volume and frequency of communication that needs to occur to impart understanding.

Here are some scenarios for consideration:

  1. An individual developer working on his own project for his own reasons. He understands his own project, he doesn’t need to communicate with anyone else about the need for changes, or explain those modifications, or anything else. All those conversations happen inside his own skull, so latency is very low, even for unskilled developers.

  2. A developer working on a team, for that team’s own shared reasons. Communication latency is necessarily higher, since there is more than one developer, but all the team members are (in theory) very familiar with what they want to build, and are in constant short feedback loops.

  3. An individual developer working on a project, for a client he communicates with directly. This is the first point at which we see productivity drop off. The developer now needs to coordinate his development with the wishes, desires, and requirements of an external client whose business he probably does not participate in. The time needed to perform this communication is too often neglected when scheduling; developers often think this time “doesn’t count” when building a calendar estimate. The amount of time doing only the development on a controller method may be only an hour, but when communications are factored in, it takes longer.

  4. A developer working on a team, for a client he will rarely communicate with directly. This is the second point at which productivity drops off. Now the developer must communicate with an intermediary about the system requirements. That other person may be a developer senior to him, a system architect, or a project manager. In addition, the developer is likely to be working with at least one other developer to perform his tasks. The added communication delays factor in here as well.

Conclusion

Once I have the core estimate in place, it becomes possible to estimate the other portions of the project, especially things like setup overhead and deployment risks.