"Reality Distortion Field"

by Miguel de Icaza

"Reality Distortion Field" is a modern day cop out. A tool used by men that lack the intellectual curiosity to explain the world, and can deploy at will to explain excitement or success in the market place. Invoking this magical super power saves the writer from doing actual work and research. It is a con perpetuated against the readers.

The expression originated as an observation made by those who worked with Steve to describe his convincing passion. It was an insider joke/expression which has now been hijacked by sloppy journalists whenever a subject is over their heads.

The official Steve Jobs biography left much to be desired. Here a journalist was given unprecedented access to Steve Jobs and could have gotten answers to the thousands of questions that we still have to this day. How did he approach problems? Did he have a method? How did he really work with his team? How did he turn his passion for design into products? How did he make strategic decisions about the future of Apple? How did the man balance engineering and marketing problems?

The biography has some interesting anecdotes, but fails to answer any of these questions. The biographer was not really interested in understanding or explaining Steve Jobs. He collected a bunch of anecdotes, strung them together in chronological order, had the text edited and cashed out.

Whenever the story gets close to an interesting historical event, or starts exploring a big unknown of Steve's work, we are condescendingly told that "Steve Activated the Reality Distortion Field".

Every. Single. Time.

Not once did the biographer try to uncover what made people listen to Steve. Not once did he try to understand the world in which Steve operated. The breakthroughs of his work are described with the same passion as a Reuters news feed: an enumeration of his achievements, with anecdotes to glue the thing together.

Consider the iPhone: I would have loved to know how the iPhone project was conceived. What internal process took place that allowed Apple to gain the confidence to become a phone manufacturer? There is a fascinating story in the people that made this happen, in the millions of details of how the project was evaluated, and in what the vision for the project was, down to every small detail that Steve cared about.

Instead of learning about the amazing hardware and software engineering challenges that Steve faced, we are told over and over that all Steve had to do was activate his special super power.

The biography, in short, is a huge missed opportunity. Unprecedented access to a man that reshaped entire industries, and all we got was some gossip.

The "Reality Distortion Field" is not really a Steve Jobs super-power, it is a special super power that the technical press uses every time they are too lazy to do research.

Why run expensive and slow user surveys, or purchase expensive research from analysts to explain why some product is doing well or why people are buying it, when you can just slap a "they activated the Reality Distortion Field and sales went through the roof" statement in your article?

As of today, a Google News search for "Reality Distortion Field Apple" reports 532 results for the last month.

Perhaps this is just how the tech press must operate nowadays. There is just no time to do research as new products are being unveiled around the clock, and you need to deliver opinions and analysis on a daily basis.

But as readers, we deserve better. We should reject these explanations for what they are: a cheap grifter trick.

Posted on 07 Nov 2012


Mono 3.0 is out

by Miguel de Icaza

After a year and a half, we have finally released Mono 3.0.

As I discussed last year, we will be moving to a more nimble release process with Mono 3.0. We are trying to reduce our inventory of pending work and get new features to everyone faster. This means that our "master" branch will remain stable from now on, and that large projects will instead be developed in branches that are regularly landed into our master branch.

What is new

Check our release notes for the full details of this release. But here are some tasty bits:

  • C# Async compiler (see the sketch after this list)
  • Unified C# compiler for all profiles
  • 4.5 Async API Profile
  • Integrated Microsoft's newly open sourced stacks:
    • ASP.NET MVC 4
    • ASP.NET WebPages
    • Entity Framework
    • Razor
    • System.Json (replaces our own)
  • New High performance Garbage Collector (SGen - with many performance and scalability improvements)
  • Metric ton of runtime and class library improvements.
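
To give a flavor of the new async support, here is a minimal sketch; it assumes the DownloadStringTaskAsync method from the 4.5 async API profile:

using System;
using System.Net;
using System.Threading.Tasks;

class Demo {
    // An async method returns a Task; await suspends the method
    // without blocking the calling thread.
    static async Task<int> GetPageLengthAsync (string url)
    {
        var client = new WebClient ();
        string body = await client.DownloadStringTaskAsync (url);
        return body.Length;
    }

    static void Main ()
    {
        Console.WriteLine (GetPageLengthAsync ("http://www.mono-project.com").Result);
    }
}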

Also, expect F# 3.0 to be bundled in our OSX distribution.

Posted on 22 Oct 2012


The Sophisticated Procrastinator - Volume 1

by Miguel de Icaza

Let me share with you some links that I found interesting in the past few weeks. These should keep the most diligent person busy for a few hours.

Software Reads

Talbot Crowell's Introduction to F# 3.0 slides from Boston CodeCamp.

Bertrand Meyer (the creator of Eiffel, father of good taste in engineering practices) writes Fundamental Duality of Software Engineering, on specifications and tests. This is one of those essays where every idea is beautifully presented. A must read.

Good article on weakly ordered CPUs.

MonkeySpace slide deck on MonoGame.

David Siegel shares a cool C# trick, switch expressions.

Oak: Frictionless development for ASP.NET MVC.

Simon Peyton Jones on video talks about Haskell, past, present and future. A very tasty introductory talk to the language. David Siegel says about this:

Simon Peyton-Jones is the most eloquent speaker on programming languages. Brilliant, funny, humble, adorable.

Rob Pike's talk on Concurrency is not Parallelism. Rob is one of the crispest minds in software development: anything he writes, you must read; anything he says, you must listen to.

Answering the question of what is the fastest way to access properties dynamically: DynamicMethod, LINQ expressions, or MethodInfo. Discussion with Eric Maupin.

OpenGL ES Quick Reference Card, plus a good companion: Apple's Programming Guide.

Interesting Software

SparkleShare, the open source file syncing service running on top of Git, has released their feature-complete product and is preparing for a 1.0 release. SparkleShare runs on Linux, Mac and Windows. Check out their Release Notes.

Experts warn that Canonical might distribute a patched version that modifies your documents and spreadsheets to include ads and Amazon referral links.

Pheed, a Twitter competitor with a twist.

Better debugging tools for Google Native Client.

Touch Draw comes to MacOS: a great vector drawing application for OSX, a good companion to Pixelmator, and handy for maintaining iOS artwork. It has great support for structured graphics and for importing/exporting Visio files.

MonoGame 3D on the Raspberry Pi video.

Fruit Rocks, a fun little game for iOS.

@Redth, the one-man factory of cool hacks, has released:

  • PassKitSharp, a C# library to generate, maintain and process Apple's Passbook files
  • Zxing.Mobile, an open source barcode library built on top of ZXing (Zebra Crossing) that runs on iOS and Android.
  • PushSharp, a server-side library for sending Push Notifications to iOS (iPhone/iPad APNS), Android (C2DM and GCM - Google Cloud Messaging), Windows Phone, Windows 8, and Blackberry devices.

Coding on Passbook: Lessons Learned.

Building a Better World

Phil Haack blogs about MonkeySpace.

Patrick McKenzie writes Designing First Run Experiences to Delight Users.

Kicking the Twitter habit.

Twitter Q&A with TJ Fixman, writer for Insomniac Games.

Debunking the myths of budget deficits: Children and Grandchildren do not pay for budget deficits, they get interest on the bonds.

Cool Stuff

Live updates on HoneyPots set up by the HoneyNet Project.

Programming F# 3.0, 2nd Edition is out. Chris Smith's delightful book on F# has been updated to cover the new and amazing type providers in F#.

ServiceStack now has 113 contributors.

News

From Apple Insider: Google may settle mobile FRAND patent antitrust claim.

The Salt Lake Tribune editorial board endorses Obama over Romney:

In considering which candidate to endorse, The Salt Lake Tribune editorial board had hoped that Romney would exhibit the same talents for organization, pragmatic problem solving and inspired leadership that he displayed here more than a decade ago. Instead, we have watched him morph into a friend of the far right, then tack toward the center with breathtaking aplomb. Through a pair of presidential debates, Romney’s domestic agenda remains bereft of detail and worthy of mistrust.

Therefore, our endorsement must go to the incumbent, a competent leader who, against tough odds, has guided the country through catastrophe and set a course that, while rocky, is pointing toward a brighter day. The president has earned a second term. Romney, in whatever guise, does not deserve a first.

In Blue States are from Scandinavia, Red States are from Guatemala, the author looks at the differences in policies in red vs blue states, and concludes:

Advocates for the red-state approach to government invoke lofty principles: By resisting federal programs and defying federal laws, they say, they are standing up for liberty. These were the same arguments that the original red-staters made in the 1800s, before the Civil War, and in the 1900s, before the Civil Rights movement. Now, as then, the liberty the red states seek is the liberty to let a whole class of citizens suffer. That’s not something the rest of us should tolerate. This country has room for different approaches to policy. It doesn’t have room for different standards of human decency.

Esquire's take on the 2nd Presidential Debate.

Dave Winer wrote Readings from News Execs:

There was an interesting juxtaposition. Rupert Murdoch giving a mercifully short speech saying the biggest mistake someone in the news business could make is thinking the reader is stupid. He could easily have been introducing the next speaker, Bill Keller of the NY Times, who clearly thinks almost everyone who doesn't work at the NY Times is stupid.

What do you know: it turns out that Bill Moyers is not funded by the government, nor does he get tax money, contrary to what many Republicans would like people to believe. The correction is here.

Twitter Quotes

Joseph Hill

"Non-Alcoholic Sparkling Beverage" - Whole Foods' $7.99 name for "bottle of soda".

Jonathan Chambers

Problem with most religious people is that their faith tells them to play excellently in game of life, but they want to be the referees.

Hylke Bons on software engineering:

"on average, there's one bug for every 100 lines of code" this is why i put everything on one line

Waldo Jaquith:

If government doesn't create jobs, isn't Romney admitting that his campaign is pointless?

Alex Brown

OH "It is a very solid grey area" #sc34 #ooxml

Jo Shields

"I don't care how many thousand words your blog post is, the words 'SYMBIAN WAS WINNING' mean you're too high on meth to listen too.

Jeremy Scahill, on warmonger Max Boot, asks the questions:

Do they make a Kevlar pencil protector? Asking for a think tanker.

Max Boot earned a Purple Heart (shaped ink stain on his shirt) during the Weekly Standard War in 1994.

Tim Bray

"W3C teams with Apple, Google, Mozilla on WebPlatform"... or we could all just sponsor a tag on StackOverflow.

David Siegel

Most programmers who claim that types "get in the way" had a sucky experience with Java 12 years ago, tried Python, then threw the baby out.

Outrage Dept

How Hollywood Studios employ creative accounting to avoid sharing the profits with the participants. If you are looking for ways to scam your employees and partners, look no further.

Starvation in Gaza: State forced to release 'red lines' document for food consumption.

Dirty tricks and disturbing trends: billionaires warn employees that if Obama is reelected, they will be facing layoffs.

Israeli Children Deported to South Sudan Succumb to Malaria:

Here we are today, three months later, and within the last month alone, these two parents lost two children, and the two remaining ones are sick as well. Sunday is already in hospital with malaria, in serious condition, and Mahm is sick at home. “I’ve only two children left,” Michael told me today over the phone. The family doesn’t have money to properly treat their remaining children. The hospitals are at full capacity and more people leave them in shrouds than on their own two feet. I ask you, beg of you to help me scream the story of these children and their fate, dictated by the heartless, immoral Israeli government.

When Suicide is Cheaper: the horrifying tales of Americans that cannot afford health care.

Paul Ryan is not that different from Todd Akin when it comes to women's rights.

Interesting Discussions, Opinions and Articles

A Windows 8 critique: someone is not very happy with it.

On "meritocracy": what is wrong with it.

Fascinating read on the fast moving space of companies: Intimate Portrait of Innovation, Risk and Failure Through Hipstamatic's Lens.

Kathy Sierra discusses sexism in the tech world. How she changed her mind about it, and the story that prevented her from seeing it.

Response to @antirez's sexist piece.

Chrystia Freeland's The Self-Destruction of the 1% is a great article, which touches on the points of her book Plutocrats.

Sony's Steep Learning Process, a look at the changing game with a focus on Sony's challenges.

Entertainment

One Minute Animated Primers on Major Theories on Religion.

Cat fact and Gif provides cat facts, each with a gif to go with it. The ultimate resource of cat facts and gifs.

Posted on 21 Oct 2012


Drowning Good Ideas with Bloat. The tale of pkg.m4.

by Miguel de Icaza

The gnome-config script was a precursor to pkg-config. They are tools that you can run to extract the information needed to compile some code, link some code, or check for an installed version. gnome-config itself was a pluggable version of Tcl's tclConfig.sh script.

The idea is simple: pkg-config is a tiny little tool that uses a system database of packages to provide version checking and build information to developers. Said database is merely a well-known directory in the system containing files with the extension ".pc", one per package.
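
To make the database concrete, here is roughly what a mono.pc file might look like; the paths and version below are made up for illustration:

# mono.pc (illustrative contents)
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: Mono
Description: The Mono Runtime
Version: 2.10.0
Libs: -L${libdir} -lmono-2.0
Cflags: -I${includedir}/mono-2.0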

The tool is designed to be used from shell scripts to probe whether a particular software package has been installed. For example, the following shell script probes whether Mono is installed in the system:

# shell script

if pkg-config --exists mono; then
    echo "Found Mono!"
else
    echo "You can download mono from www.go.mono.com"
fi

It can also be used in simple makefiles to avoid hardcoding the paths to your software. The following makefile shows how:

CFLAGS = `pkg-config --cflags mono`
# LDLIBS is the variable that make's built-in link rule passes at link time
LDLIBS = `pkg-config --libs mono`

myhost: myhost.c

And if you are using Automake and Autoconf, to probe for the existence of a module with a specific version and extract the flags needed to build against it, you would do it like this:

AC_SUBST(MONO_FLAGS)
AC_SUBST(MONO_LIBS)
if pkg-config --atleast-version=2.10 mono; then
    MONO_FLAGS=`pkg-config --cflags mono`
    MONO_LIBS=`pkg-config --libs mono`
else
   AC_MSG_ERROR("You need at least Mono 2.10")
fi

There are two main use cases for pkg-config.

Probing: You use the tool to probe for some condition about a package and take an action based on the result. For this, you use the pkg-config exit code in your scripts to determine whether the condition was met. This is what both the automake sample and the first script show.

Compile Information: You invoke the tool, and it writes the results to standard output. To store the result or pass the values along, you use the shell backtick (`). That is all there is to it (example: version=`pkg-config --version`).

The tool is so immensely simple that anyone can learn every command that matters in less than 5 minutes. The whole thing is beautiful because of its simplicity.

The Siege by the Forces of Bloat

Perhaps it was a cultural phenomenon, perhaps it was someone that had nothing better to do, perhaps it was someone that was just trying to be thorough, but someone introduced one of the most poisonous memes into the pool of ideas around pkg-config.

Whoever did this thought that the "if" statement in shell was a complex command to master, or that someone might not be able to find the backtick on their keyboard.

And they hit us, and they hit us hard.

They introduced pkg.m4, a macro file intended to be used with autoconf, which would allow you to replace a handful of command line calls to pkg-config with one of their macros (PKG_CHECK_MODULES, PKG_CHECK_EXISTS). To do this, they wrote a 200 line script which replaces one line of shell code with almost a hundred. Here is a handy comparison of what these offer:

# Shell style
AC_SUBST(MONO_LIBS)
AC_SUBST(MONO_CFLAGS)
if pkg-config --atleast-version=2.10 mono; then
   MONO_CFLAGS=`pkg-config --cflags mono`
   MONO_LIBS=`pkg-config --libs mono`
else
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
fi

#
# With the idiotic macros
#
PKG_CHECK_MODULES([MONO], [mono >= 2.10],[], [
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
])

#
# If you do not need split flags, shell becomes shorter
#
if pkg-config --atleast-version=2.10 mono; then
   CFLAGS="$CFLAGS `pkg-config --cflags mono`"
   LIBS="$LIBS `pkg-config --libs mono`"
else
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
fi

The above shows the full benefit of using the macro: MONO is a prefix that will have LIBS and CFLAGS variables filled in for it. So the shell script loses. The reality is that the macros only give you access to a subset of the functionality of pkg-config (no support for splitting -L and -l arguments, querying provider-specific variable names or performing macro expansion).

Most projects adopted the macros because they copy/pasted the recipe from somewhere else and thought this was the right way of doing things.

The hidden price is that saving those few lines of code actually inflicts a world of pain on your users. You will probably see this in your forums in the form of:

Subject: Compilation error

I am trying to build your software, but when I run autogen.sh, I get
the following error:

checking whether make sets $(MAKE)... yes
checking for pkg-config... /usr/bin/pkg-config
./configure: line 1758: syntax error near unexpected token `FOO,'
./configure: line 1758: `PKG_CHECK_MODULES(FOO, foo >= 2.9)'

And then you will engage in a discussion that, in the best case scenario, helps the user correctly configure his ACLOCAL_FLAGS or create his own "setup" script that properly configures his system, and your new users will learn the difference between running a shell script and "sourcing" one when setting up a development environment.

In the worst case scenario, the discussion will devolve into how stupid your user is for not knowing how to use a computer, and how he should be shot in the head and taken out to the desert for his corpse to be eaten by vultures; because, god dammit, he should have googled that on his own, he should never have installed two separate automake installations in two prefixes without properly updating his ACLOCAL_FLAGS, and he should have figured out on his own that his paths were wrong in the first place. Seriously, what moron in this day and age is not familiar with the limitations of aclocal and the best practices for using system-wide m4 macros?

Hours are spent on these discussions every year. Potential contributors to your project are driven away, countless hours that could have gone into fixing bugs and producing code are wasted, and your users are frustrated. And you saved 4 lines of code.

pkg.m4 is a poison that is holding us back.

We need to end this reign of terror.

Send pull requests to eliminate that turd, and ridicule anyone that suggests that there are good reasons to use it. In the war for good taste, it is ok to vilify and scourge anyone that defends pkg.m4.

Posted on 20 Oct 2012


Why Mitt does not need an Economic Plan

by Miguel de Icaza

Mitt Romney does not need to have an economic plan. He does not need to have a plan to cut the deficit or to cut services.

It is now well understood that to get the US out of the recession, the government has to inject money into the economy. To inject money into the economy, the US needs to borrow some money and spend it. Borrowing costs are also at an all-time low, so the price to pay is very low.

Economists know this, and Republicans know this.

But the Republicans' top priority is to get Obama out of office at any cost. Even at the cost of prolonging the recession, damaging the US credit rating and keeping people unemployed.

The brilliance of the Republican strategy is that they have convinced the world that the real problem facing the US is the debt. Four years of non-stop propaganda in newspapers and on TV shows have turned everyone into a "fiscal conservative". The propaganda efforts have succeeded in convincing the world that US economic policy should be subject to the same rules as balancing a household budget (I won't link to this idiocy).

The campaign has been brilliant and has forced the Democrats to adopt policies of austerity instead of policies of growth. Instead of spending left and right to create value, we are cutting. And yet, nobody has stopped the propaganda and pointed out that growth often comes after spending money. Startups start in the red and are funded for several years before they become profitable; companies go public and use the IPO to raise capital to grow, and for many years they lose money until their investments pay off and allow them to turn the tide.

So this mood has forced Obama to talk about cuts. He needs to be detailed about his cuts, he needs to be a fiscal conservative.

But Economists and Republicans know what the real fix is. They know they have to spend money.

If Romney is elected to office, he will do just that. He will borrow and spend money, because that is the only way of getting out of the recession. That is why his plan does not need to have any substance, and why he can ignore the calls for more details: he has no intention of following through on them.

Obama made a critical mistake in his presidency. He decided to compromise with Republicans, begging to be loved by them, and in the process betrayed his own base and played right into the Republicans' plans.

Posted on 04 Oct 2012


Mono 2.11.4 is out

by Miguel de Icaza

A couple of weeks ago we released Mono 2.11.4; I had not had time to blog about it.

Since our previous release a month before, we had some 240 commits, spread like this:

488 files changed, 28716 insertions(+), 6921 deletions(-)

Among the major updates in this release:

  • Integrated the Google Summer of Code code for Code Contracts.
  • Integrated the Google Summer of Code code for TPL's DataFlow (see the sketch below).
  • Plenty of networking stack fixes and updates (HTTP stack, web services stack, WCF)
  • Improvements to the SGen GC.
  • TPL fixes for AOT systems like the iPhone.
  • Debugger now supports batched method invocations.

And of course, a metric ton of bug fixes all around.
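
For those unfamiliar with TPL DataFlow, here is a minimal sketch of the kind of API the new code implements; it assumes the standard ActionBlock type from System.Threading.Tasks.Dataflow:

using System;
using System.Threading.Tasks.Dataflow;

class DataFlowDemo {
    static void Main ()
    {
        // An ActionBlock consumes posted items, here printing each one.
        var printer = new ActionBlock<int> (n => Console.WriteLine ("Got {0}", n));

        for (int i = 0; i < 5; i++)
            printer.Post (i);

        printer.Complete ();         // signal that no more items are coming
        printer.Completion.Wait (); // wait for the block to drain
    }
}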

Head over to Mono's Download Page to get the goods. We would love to hear about any bugs, so we can have a great stable release.

Posted on 02 Oct 2012


TypeScript: First Impressions

by Miguel de Icaza

Today Microsoft announced TypeScript, a typed superset of Javascript. This means that existing Javascript code can be gradually modified to add typing information, improving the development experience both by providing better errors at compile time and by providing code completion during development.

As a language fan, I like the effort, just like I pretty much like most new language efforts aimed at improving developer productivity: from C#, to Rust, to Go, to Dart and to CoffeeScript.

A video introduction from Anders was posted on Microsoft's web site.

The Pros

  • Superset of Javascript allows easy transition from Javascript to typed versions of the code.
  • Open source from the start, using the Apache License.
  • Strong types help developers catch errors before they deploy the code; this is a very welcome addition to the developer toolchest. Script#, Google GWT and C# on the web all try to solve the same problem in different ways.
  • Extensive type inference, so you get to keep a lot of the dynamism of Javascript, while benefiting from type checking.
  • Classes, interfaces, visibility are first class citizens. It formalizes them for those of us that like this model instead of the roll-your-own prototype system (see the sketch after this list).
  • Nice syntactic sugar reduces boilerplate code to explicit constructs (class definitions for example).
  • TypeScript is distributed as a Node.JS package, and it can be trivially installed on Linux and MacOS.
  • The adoption can be done entirely server-side, or at compile time, and requires no changes to existing browsers or runtimes to run the resulting code.
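
As a taste of the syntax, here is a small sketch of my own (not from the announcement) showing annotations, interfaces, classes and type inference at work:

interface Person {
    name: string;
    age: number;
}

class Greeter {
    constructor (public greeting: string) { }

    // The return type is inferred as string.
    greet (p: Person) {
        return this.greeting + ", " + p.name;
    }
}

var g = new Greeter ("Hello");   // g is inferred to be a Greeter
alert (g.greet ({ name: "Anders", age: 52 }));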

Out of Scope

Type information is erased when the code is compiled, just like Java erases generic type information when it compiles. This means that the underlying Javascript engine is unable to optimize the resulting code based on the strong type information.

Dart on the other hand is more ambitious, as it uses the type information to optimize the quality of the generated code. This means that a function that adds two numbers (function add (a,b) { return a+b;}) can generate native code to add two numbers; basically, it can generate the equivalent of the following C code:

double add (double a, double b)
{
    return a+b;
}

While for weakly typed Javascript it must generate something like:

JSObject add (JSObject a, JSObject b)
{
    if (type (a) == typeof (double) &&
        type (b) == typeof (double))
        return a.ToDouble () + b.ToDouble ();
    else
        JIT_Compile_add_with_new_types ();
}

The Bad

The majority of the Web is powered by Unix.

Developers use MacOS and Linux workstations to write the bulk of the code, and deploy to Linux servers.

But TypeScript only delivers half of the value of using a strongly typed language to Unix developers: strong typing. Intellisense, code completion and refactoring are tools that are only available to Visual Studio Professional users on Windows.

There is no Eclipse, MonoDevelop or Emacs support for any of the language features.

So Microsoft will need to convince Unix developers to use this language merely based on the benefits of strong typing, a much harder task than luring them with both language features and tooling.

There is some basic support for editing TypeScript from Emacs, which is useful to try the language, but without Intellisense, it is obnoxious to use.

Posted on 01 Oct 2012


Free Market Fantasies

by Miguel de Icaza

This recording of a Q&A with Noam Chomsky from 1997 could be a Q&A session held last night about bailouts, corporate welfare, and the various distractions they use to keep us in the dark, like caring about "fiscal responsibility".

Also on iTunes and Amazon.

Posted on 07 Sep 2012


2012 Update: Running C# on the Browser

by Miguel de Icaza

With our push to share the kernel of your software in reusable C# libraries and build a native experience per platform (iOS, Android, WP7 on phones; WPF/Windows, MonoMac/OSX, Gtk/Linux on the desktop), one component has always been missing: what about a web UI that also shares some of the code?

Until very recently the answer was far from optimal, and included things like: put the kernel on the server and use some .NET stack to ship the HTML to the client.

Today there are two solid choices to run your C# code on the browser and share code between the web and your native UIs.

JSIL

JSIL will translate the ECMA/.NET Intermediate Language into Javascript and will run your code in the browser. JSIL is pretty sophisticated, and their approach to running IL code in the browser also includes a bridge that allows your .NET code to reference web page elements. This means that you can access the DOM directly from C#.

You can try their Try JSIL page to get a taste of what is possible.
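
To illustrate the idea, here is a purely hypothetical sketch: the Document and Element types below are stand-ins I made up (stubbed so the sketch compiles on its own), not JSIL's actual bridge API:

using System;

// Hypothetical stand-ins for a DOM bridge; JSIL's real API differs.
class Element {
    public string InnerText { get; set; }
}

static class Document {
    // Stubbed here; under a browser bridge this would return the live page element.
    public static Element GetElementById (string id)
    {
        return new Element ();
    }
}

class App {
    static void Main ()
    {
        var status = Document.GetElementById ("status");
        status.InnerText = "Hello from C#";
        Console.WriteLine (status.InnerText);
    }
}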

Saltarelle Compiler

The Saltarelle Compiler takes a different approach. It is a C# 4.0 compiler that generates JavaScript instead of generating IL. It is interesting that this compiler is built on top of the new NRefactory which is in turn built on top of our C# Compiler as a Service.

It is a fresh, new compiler, and unlike JSIL it is limited to compiling the C# language. Although it is missing some language features, it is actively being developed.

This compiler was inspired by Script# which is a C#-look-alike language that generated Javascript for consuming on the browser.

Native Client

I left Native Client out of the list above, which is not fair, considering that both Bastion and Go Home Dinosaurs are powered by Mono running on Native Client.

The only downside with Native Client today is that it does not run on iOS or Android.

Posted on 06 Sep 2012


What Killed the Linux Desktop

by Miguel de Icaza

True story.

The hard disk that hosted my /home directory on my Linux machine failed, so I had to replace it with a new one. Since this machine lives under my desk, I had to unplug all the cables, get it out, swap the hard drives and plug everything back in again.

Pretty standard stuff: plug AC, plug keyboard, plug mouse. But when I got to the speaker cable, I just skipped it.

Why bother setting up the audio?

It will likely break again and will force me to go on a hunting expedition to find out more than I ever wanted to know about the new audio system and the driver technology we are using.

A few days ago I spoke to Klint Finley from Wired, who wrote the article titled OSX Killed Linux. The original line of questioning was about my opinion on Gnome 3's shell vs Ubuntu's Unity vs Xfce as competing shells.

Personally, I am quite happy with Gnome Shell. I think the team that put it together did a great job, and I love how it enabled the Gnome designers -who historically only design, barely hack- to actually extend the shell, tune the UI and prototype things without having to beg a hacker to implement things for them. It certainly could use some fixes and tuning, but I am sure they will address those eventually.

What went wrong with Linux on the Desktop

In my opinion, the problem with Linux on the Desktop is rooted in the developer culture that was created around it.

Linus, despite being a low-level kernel guy, set the tone for our community years ago when he dismissed binary compatibility for device drivers. The kernel people might have had some valid reasons for it, and might have forced the industry to play by their rules, but the Desktop people did not have the power that the kernel people did. We did keep the attitude, though.

The attitude of our community was one of engineering excellence: we do not want deprecated code in our source trees, we do not want to keep broken designs around, we want pure and beautiful designs and we want to eliminate all traces of bad or poorly implemented ideas from our source code trees.

And we did.

We deprecated APIs, because there was a better way. We removed functionality because "that approach is broken", for degrees of broken from "it is a security hole" all the way to "it does not conform to the new style we are using".

We replaced core subsystems in the operating system with poor transition paths. We introduced compatibility layers that were not really compatible, nor were they maintained. When faced with "this does not work", the community response was usually "you are doing it wrong".

As long as you had an operating system that was 100% free, and you could patch and upgrade every component of your operating system to keep up with the system updates, you were fine and it was merely an inconvenience that lasted a few months while the kinks were sorted out.

The second dimension to the problem is that no two Linux distributions agreed on which core components the system should use. Either they did not agree, the schedules of the transitions were out of sync, or there were competing implementations for the same functionality.

The efforts to standardize on a kernel and a set of core libraries were undermined by the Distro of the Day that held the position of power. If you were the top dog, you did not want to make any concessions that would help other distributions catch up with you. Being incompatible became a way of gaining market share. A strategy that continues to be employed by the 800 pound gorillas in the Linux world.

To sum up: (a) first dimension: things change too quickly, breaking both open source and proprietary software alike; (b) second dimension: incompatibility across Linux distributions.

This killed the ecosystem for third party developers trying to target Linux on the desktop. You would try once, doing your best to support the "top" distro, or, if you were feeling generous, "the top three" distros, only to find out that your software no longer worked six months later.

Supporting Linux on the desktop became a burden for independent developers.

But at this point, those of us in the Linux world still believed that we could build everything as open source software. The software industry as a whole had a few home runs, and we were convinced we could implement those ourselves: spreadsheets, word processors, design programs. And we did a fine job at that.

Linux pioneered solid package management and the most advanced software updating systems. We did a good job, considering our goals and our culture.

But we missed the big picture. We alienated every third party developer in the process. The ecosystem that has sprung to life with Apple's OSX AppStore is just impossible to achieve with Linux today.

The Rise of OSX

When OSX was launched it was by no means a very sophisticated Unix system. It had an old kernel, an old userland, poor compatibility with modern Unix, primitive development tools and a very pretty UI.

Over time Apple addressed the majority of the problems with its Unix stack: they improved compatibility, improved their kernel, more open source software started working and things worked out of the box.

The most pragmatic contributors to Linux and open source gradually changed their goals from "a world run by open source" to "the open web". Others found that messing around with their audio card every six months just to play music, and the hardships of watching video on Linux, were not worth that much. People started moving to OSX.

Many hackers moved to OSX. It was a good looking Unix, with working audio, PDF viewers, working video drivers, codecs for watching movies and at the end of the day, a very pleasant system to use. Many exchanged absolute configurability of their system for a stable system.

As for myself, I had fallen in love with the iPhone, so using a Mac on a day-to-day basis was a must. Having been part of the Linux Desktop efforts, I felt a deep guilt for liking OSX and moving a lot of my work to it.

What we did wrong

Backwards compatibility, and compatibility across Linux distributions, is not a sexy problem. It is not even remotely an interesting problem to solve. Nobody wants to do that work, everyone wants to innovate and be responsible for the next big feature in Linux.

So Linux was left with idealists that wanted to design the best possible system without having to worry about boring details like support and backwards compatibility.

Meanwhile, on Windows 8 you can still run the 2001-era Photoshop that shipped when XP was launched. And you can still run your old OSX apps on Mountain Lion.

Back in February I attended FOSDEM and two of my very dear friends were giggling out of excitement at their plans to roll out a new system that will force many apps to be modified to continue running. They have a beautiful vision to solve a problem that I never knew we had, and that no end user probably cares about, but every Linux desktop user will pay the price.

That day I stopped feeling guilty about my new found love for OSX.

Update September 2nd, 2012

Clearly there is some confusion over the title of this blog post, so I wanted to post a quick follow-up.

What I mean with the title is that Linux on the Desktop lost the race for a consumer operating system. It will continue to be a great engineering workstation (that is why I am replacing the hard disk in my system at home) and yes, I am aware that many of my friends use Linux on the desktop and love it.

But we lost the chance of becoming a mainstream consumer OS. What this means is that nobody is recommending that a non-technical person get a computer with Linux on it for their desktop needs (unless they are doing so for ideological reasons).

We had our share of chances. The best one was when Vista bombed in the marketplace. But we had our own internal battles and struggles to deal with. Some of you have written your own takes on our struggles in that period.

Today, the various Linux desktops are the best they have ever been: Ubuntu and Unity, Fedora and GnomeShell, RHEL and Gnome 2, Debian and Xfce, plus the KDE distros. And yet, we still have four major desktop APIs, and about half a dozen popular and slightly incompatible versions of Linux on the desktop, each with its own curated OS subsystems, with different packaging systems, with different dependencies and slightly different versions of the core libraries. This works great for pure open source, but not so much for proprietary code.

Shipping and maintaining apps for these rapidly evolving platforms is a big challenge.

Linux succeeded in other areas: servers and mobile devices. But on the desktop, our major feature and differentiator is price, and it comes at the expense of a timid selection of native apps and frequent breakage. The Linux Hater blog parodied this on a series of posts called the Greatest Hates.

The only way to fix Linux is to take one distro and one set of components as a baseline, abandon everything else, and have everyone contribute to this single Linux. Whether this is Canonical's Ubuntu, Red Hat's Fedora, Debian's system or a new joint effort is something that intelligent people will disagree on until the end of days.

Posted on 29 Aug 2012

