Callbacks as our Generation's Go To Statement

by Miguel de Icaza

This week, as I was preparing my presentation on using C# on iOS and Android, I came to the realization that callback-based programming has somehow become an acceptable programming model.

It has become an acceptable programming model, in the same way that using IF/GOTO to write programs was an acceptable model back in the 60's. They were acceptable because we did not have anything better to replace them.

Today, both C# and F# support a programming model that will do for our generation what structured programming, high-level languages and control flow primitives did for developers in the 60's.

Sadly, when many developers hear the words "C# async", they immediately think "I have callbacks in my language already". Variations of this include "Promises are better", "Futures are the way to go", "Objective-C has blocks" and "the operating system provides that". All of these statements are made by people that have yet to study C# async or to grasp what it does.

This is my attempt at explaining why the C# async model is such a leap forward for developers.

Callbacks are a Band Aid

Callbacks have improved significantly over the years. In the pure C days, if you wanted to use callbacks, you would have to write code like this:

void cback (void *key, void *value, void *user_state)
{
	// My data is stored in the user_state, fetch it
	// in this case, just a simple int.

	int *sum = (int *) user_state;

	*sum = *sum + *(int *)value;
}

int sum_values (Hashtable *hash)
{
	int sum = 0;

	hash_table_foreach (hash, cback, &sum);
	return sum;
}

Developers had to manually pass around pointers to the state they managed, which is just very cumbersome.

Today with languages that support lambdas, you can write code instead that can capture the state, so things like the above become:

int sum_values (Hashtable hash)
{
	int sum = 0;
	hash.foreach ((key, value) => { sum += value; });
	return sum;
}

Lambdas have made writing code a lot simpler, and now we see this in UI applications that use events/lambdas to react to user input, and in JavaScript apps, both in the browser and on the server, that use callbacks to get their job done.

In Node.js's case, the original idea was to scale a server by removing blocking operations and instead offering a purely callback-driven model. For desktop applications, you often want to chain operations: "in response to a click, download a file, then uncompress it, then save it to the location specified by the user", all while interleaving bits of user interface and background operation.

This leads to callbacks nested inside callbacks, where each indentation level executes at some point in the future. Some people refer to this as Callback Hell.

During this week's preparation, Marco Arment happened to tweet this:

This is a common idiom. On our web site, when we launched Async, we shared this sample:

private void SnapAndPost ()
{
    Busy = true;
    UpdateUIStatus ("Taking a picture");
    var picker = new Xamarin.Media.MediaPicker ();
    var picTask = picker.TakePhotoAsync (new Xamarin.Media.StoreCameraMediaOptions ());
    picTask.ContinueWith ((picRetTask) => {
        InvokeOnMainThread (() => {
            if (picRetTask.IsCanceled) {
                Busy = false;
                UpdateUIStatus ("Canceled");
            } else {
                var tagsCtrl = new GetTagsUIViewController (picRetTask.Result.GetStream ());
                PresentViewController (tagsCtrl, true, () => {
                    UpdateUIStatus ("Submitting picture to server");
                    // Run the upload on a background task; its Result is read in the continuation below.
                    var uploadTask = Task.Factory.StartNew (() =>
                        PostPicToService (picRetTask.Result.GetStream (), tagsCtrl.Tags));
                    uploadTask.ContinueWith ((uploadRetTask) => {
                        InvokeOnMainThread (() => {
                            Busy = false;
                            UpdateUIStatus (uploadRetTask.Result.Failed ? "Canceled" : "Success");
                        });
                    });
                });
            }
        });
    });
}

The problem with these nested callbacks is that you can tell very quickly that this is not a code base you want to be working with. The sample does some basic error handling, but it does not even attempt any real error recovery.

Thinking about extending the above functionality makes me pause: perhaps there is something else I can do to avoid patching the above function?

And if I wanted to do better error recovery or implement a better workflow, I can see myself getting annoyed at the bookkeeping required: making sure the "Busy" value is properly updated on every possible exit (including the exits I add).

This is ugly to the point that your mind starts to wander: "perhaps there is a new article on Hacker News" or "did a new cat get posted on catoverflow.com?".

Also notice that in the sample above a context switch takes place in every lambda: from background threads to foreground threads. You can imagine a real version of this function being larger, gaining more features and accumulating bugs in corner cases that are not easily visible.

And the above reminded me of Dijkstra's Go To Statement Considered Harmful. This is what Dijkstra had to say about it in the late 60's:

For a number of years I have been familiar with the observation that the quality of programmers is a decreasing function of the density of go to statements in the programs they produce. More recently I discovered why the use of the go to statement has such disastrous effects, and I became convinced that the go to statement should be abolished from all "higher level" programming languages (i.e. everything except, perhaps, plain machine code). At that time I did not attach too much importance to this discovery; I now submit my considerations for publication because in very recent discussions in which the subject turned up, I have been urged to do so.

My first remark is that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the "making" of the corresponding process is delegated to the machine.

My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.

And this is exactly the kind of problem that we are facing with these nested callbacks that cross boundaries.

Just like in the Go To days, or the days of manual memory management, we are turning into glorified accountants. Check every code path for the proper state to be properly reset, updated, disposed, released.

Surely as a species we can do better than this.

And this is precisely where C# async (and F#) come in. Every time you put the word "await" in your program, the compiler interprets this as a point in your program where execution can be suspended while some background operation takes place. The code that follows the await becomes the place where execution resumes once the task has completed.

The above ugly sample, then becomes:

private async Task SnapAndPostAsync ()
{
    try {
        Busy = true;
        UpdateUIStatus ("Taking a picture");
        var picker = new Xamarin.Media.MediaPicker ();
        var mFile = await picker.TakePhotoAsync (new Xamarin.Media.StoreCameraMediaOptions ());
        var tagsCtrl = new GetTagsUIViewController (mFile.GetStream ());
        // Call new iOS await API
        await PresentViewControllerAsync (tagsCtrl, true);
        UpdateUIStatus ("Submitting picture to server");
        await PostPicToServiceAsync (mFile.GetStream (), tagsCtrl.Tags);
        UpdateUIStatus ("Success");
    } catch (OperationCanceledException) {
        UpdateUIStatus ("Canceled");
    } finally {
        Busy = false;
    }
}	

The compiler takes the above code and rewrites it for you. There is no longer a direct mapping between each line of code there and what the compiler produces. This is similar to what happens with C# iterators or even lambdas.
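
The iterator comparison is apt. Here is a plain C# iterator (a small example of mine, not from the Xamarin samples): the compiler also rewrites it into a state machine behind the scenes, and each yield return marks a point where execution suspends and later resumes:

using System.Collections.Generic;

static IEnumerable<int> CountDown (int from)
{
    // Execution suspends at 'yield return' and resumes when the caller asks for the
    // next value, much like execution suspends and resumes around an 'await'.
    for (int i = from; i >= 0; i--)
        yield return i;
}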

The async version above looks pretty linear. And now I can see myself feeling pretty confident about changing the flow of execution: perhaps using some conditional code that triggers different background processes, using a loop to save the picture in various locations, or applying multiple filters at once. Jeremie has a nice post that happens to do this.

Notice that the handling of that annoying "Busy" flag is now centralized thanks to the finally clause. I can now guarantee that the variable is always properly updated, regardless of the code path in that program and the updates that take place to that code.

I have just delegated the bookkeeping to the compiler.

Async allows me to think about my software in the very basic terms that you would see in a flow chart, not as a collection of tightly coupled and messy processes with poorly defined interfaces.

Mind Liberation

The C# compiler infrastructure for Async is actually built on top of the Task primitive. This Task class is what people in other languages refer to as futures, promises or async. This is probably where the confusion comes from.

At this point, I consider all of these frameworks (including the one in .NET) just as lower-level plumbing.

Tasks encapsulate a unit of work and they have a few properties like their execution state, results (if completed), and exceptions or errors that might have been thrown. And there is a rich API used to combine tasks in interesting ways: wait for all, wait for some, combine multiple tasks in one and so on.
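
As a rough sketch of how those combinators are used (the method name and URLs here are illustrative, not from any real sample), Task.WhenAny waits for some and Task.WhenAll waits for all:

using System.Net.Http;
using System.Threading.Tasks;

async Task CombineDownloadsAsync ()
{
    var http = new HttpClient ();
    Task<string> first = http.GetStringAsync ("http://example.com/a");
    Task<string> second = http.GetStringAsync ("http://example.com/b");

    // Wait for some: WhenAny yields whichever task finishes first.
    Task<string> winner = await Task.WhenAny (first, second);

    // Wait for all: WhenAll combines both into a single task whose result
    // is an array with each individual result.
    string [] results = await Task.WhenAll (first, second);

    UpdateUIStatus ("First result: " + winner.Result.Length + " chars; downloads completed: " + results.Length);
}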

They are an important building block and they are a big improvement over rolling your own idioms and protocols, but they are not the means to liberate your mind. C# async is.

Frequently Asked Questions about Async

Today I had the chance to field a few questions, and I wanted to address those directly on my blog:

Q: Does async use a new thread for each operation?

A: When you use Async methods, the operation that you requested is encapsulated in a Task (or Task<T>) object. Some of these operations might require a separate thread to run, some might just queue an event for your runloop, some might use kernel asynchronous APIs with notifications. You do not really know what an Async method is using behind the scenes; that is something that the author of each method gets to pick.
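
For instance, here is a sketch (not from the FAQ; DelayAsync is a hypothetical helper) of an operation exposed as a Task without consuming any thread at all, by completing a TaskCompletionSource from a timer callback:

using System.Threading;
using System.Threading.Tasks;

static Task DelayAsync (int milliseconds)
{
    var tcs = new TaskCompletionSource<bool> ();
    Timer timer = null;
    timer = new Timer (_ => {
        tcs.TrySetResult (true);   // completes the task; no thread was blocked while waiting
        timer.Dispose ();
    });
    timer.Change (milliseconds, Timeout.Infinite);
    return tcs.Task;
}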

Q: It seems like you no longer need to use InvokeOnMainThread when using Async, why?

A: When a task completes, the default is for execution to be resumed on the current synchronization context. This is a thread-local property that points to a specific implementation of a SynchronizationContext.

On iOS and Android, we set up a synchronization context on the UI thread that ensures code resumes execution on the main thread. Microsoft does the same for their platforms.

In addition, on iOS, we also have a DispatchQueue sync context, so by default if you start an await call on a Grand Central Dispatch queue, then execution is resumed on that queue.

You can of course customize this. Use SynchronizationContext and ConfigureAwait for this.
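
As a hedged sketch of that customization (RefreshDataAsync, Process and statusLabel are hypothetical placeholders, not real APIs):

async Task RefreshAsync ()
{
    // Default behavior: after the await, execution resumes on the captured
    // SynchronizationContext, i.e. the UI thread on iOS and Android.
    var data = await RefreshDataAsync ();
    statusLabel.Text = "Loaded";        // safe, we are back on the UI thread

    // Opting out with ConfigureAwait (false): resume on whatever thread completed
    // the task, avoiding a bounce to the UI thread for purely background work.
    var more = await RefreshDataAsync ().ConfigureAwait (false);
    Process (more);                     // do not touch UI widgets from here
}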

Finally, the PFX team has a good FAQ for Async and Await.

Async Resources

Here are some good places to learn more about Async:

Posted on 15 Aug 2013


Richard Dawkins should revisit the letter to his 10 year old daughter

by Miguel de Icaza

Like Richard Dawkins, I am also an atheist. I have also enjoyed his books and I am peripherally aware of his atheist advocacy.

Recently a letter Richard Dawkins wrote to his 10 year old daughter made the rounds.

He needs to amend the letter and explain to her that it is not enough to find evidence; it is also important to be able to reason effectively about this evidence and avoid a series of logical pitfalls.

He failed to do this in a series of poorly thought out tweets, starting with this:

He followed up with a series of tweets to try to both explain the above as well as retweeting various people that came out in his defense with statements like:

I found the entire episode unbecoming of a scientist.

His original tweet, while true, does not have the effect of trying to advance our understanding of the world. It is at best a troll.

We expect scientists to use the processes, techniques and tools of science, math and logic to advance our understanding of the world, not to resort to innuendo, fallacies and poor logical constructions to prove their points.

Correlation Does Not Imply Causation

Among other things, I do not expect a scientist to imply that correlation implies causation. Which is what this tweet did.

Today he posted a long follow-up where he explains what led him to make this statement and selectively addresses some of the criticism he received. He addressed the simpler criticism, but left out the meaty ones (you can find them in the replies to his tweet).

Dawkins failed to address the major problem with his tweet, which was exactly the use of correlation to imply causation.

Instead, he digs down deeper:

Twitter's 140 character limit always presents a tough challenge, but I tried to rise to it. Nobel Prizes are a pretty widely quoted, if not ideal, barometer of excellence in science. I thought about comparing the numbers of Nobel Prizes won by Jews (more than 120) and Muslims (ten if you count Peace Prizes, half that if you don't). This astonishing discrepancy is rendered the more dramatic when you consider the small size of the world's Jewish population. However, I decided against tweeting that comparison because it might seem unduly provocative (many Muslim "community leaders" are quite outspoken in their hatred of Jews) and I sought a more neutral comparison as more suitable to the potentially inflammable medium of Twitter. It is a remarkable fact that one Cambridge college, Trinity, has 32 Nobel Prizes to its credit. That's three times as many as the entire Muslim world even if you count Peace Prizes, six times as many if you don't. I dramatised the poverty of Muslim scientific achievement, and the contrast with their achievements in earlier centuries, in the following brief tweet: "All the world's Muslims have fewer Nobel Prizes than Trinity College, Cambridge. They did great things in the Middle Ages, though."

Now we know that Richard was not merely stating a couple of facts on his original tweet. He was trying to establish a relationship between a religion and scientific progress.

One possible explanation that does not involve Muslim-hood is that the majority of Muslims live in impoverished nations (see map). Poverty and access to resources are likely bigger reasons for the lack of advancement in the sciences than belonging to a particular religion.

Is my theory better than Richard's? We could test the theory by looking at the list of Nobel laureates per country.

Let us consider my country, Mexico, a poor country compared to the wealth of the UK. We have twice as many people living in Mexico compared to the UK. Sadly, we only have three Nobel laureates vs Trinity College's thirty two.

If we expand the scope to Latin America, which has half a billion people, we can still only muster sixteen laureates vs Trinity's 32.

Let us look at the African continent, with its billion people. It manages to score only 20 Nobel laureates.

And shockingly, the wealthier the nation, the more laureates. South Africa accounts for half of Africa's laureates (with ten); Egypt, which has a better economy than most other African nations and gets tasty American aid, gets five, which leaves another five for the rest of the continent.

If I had some axe to grind against Mexicans, Spanish speakers, Africans, Muslims, Bedouins, Brazilians, or Latin Americans, I could probably make a statement as truthful as Richard's original tweet, which could be just as offensive to those populations and, just like Richard's, prove absolutely nothing.

I think we have stronger evidence that access to wealth has an effect on how many people get this award than religion does.

The second flaw in his argument is to compare a university, with access to funds and a fertile ground for research and education, to a group of people linked only by religion.

Perhaps people go to places like Trinity College because it is a fertile ground for research and education. If that is the case, then we have an explanation for why Trinity might have more Nobel laureates.

Luckily, Cesar Hidalgo's research on understanding prosperity shows what we intuitively know: that economic development clusters around existing centers. That is why actors, writers and directors move to LA, financiers move to New York and why companies ship their high-end phone manufacturing to China. You go where there is a fertile ground. Richard, instead of reading the long papers from Cesar, you might want to watch this 17 minute presentation he did at TEDx Boston.

So is Trinity one of these clusters? Can we find other clusters of research and expect them to have a high concentration of Nobel prize laureates? Let me pick two examples: MIT, which is next door to my office, has 78 laureates, and Berkeley, where I used to hang out because my mom graduated from there, has 22.

So we have three universities with 132 Nobel laureates.

The following statement is just as true as Richard's original tweet, and just as pointless. Except I do not malign a religion:

All the world's companies have fewer Nobel Prizes than Universities do. Companies did great things in the Middle Ages though.

In fact there is a universe of different segments of the population that have fewer Nobel Prizes than Trinity. And every once in a while someone will try to make connections just like Richard did.

People will make their cases against groups of people based on language, race, sexual preferences, political orientation, food preferences, religion or even what video games they play.

We can not let poor logic cloud our judgement, no matter how important our points are.

I agree with Richard that I want less religion in this world, and more science-based education. But if we are going to advocate for more science-based education, let us not resort to the very processes that are discredited by science to do so.

The Origins of the Tweet

We now know that Richard could just not stomach someone saying "Islamic science deserves enormous respect" and this is why he launched himself into this argument.

I can only guess that this happened because he was criticizing religion or Islam and someone told him "Actually, you are wrong about this, Islam contributed to X and Y", and he did not like having his argument poked at.

The right answer is "You are correct, I did not consider that" and then try to incorporate this new knowledge into having a more nuanced position.

The answer is not to spread a meme based on a fallacy.

Posted on 09 Aug 2013


CFKArgentina Live-Tweets Evo Morales Airplane Crisis

by Miguel de Icaza

Last night Cristina Kirchner, Argentina's President, live tweeted the events around Evo Morales' European airplane hijacking:

  • I returned from la Rosada, Olivos 21:46 hs. They notify me that President Correa is on the phone. “Rafael? Put him on” [link]
  • “Hey Rafa, how are you?” He tells me he is angry and anguished, “You do not know what is happening?” [link]
  • “No, what is happening?”. I was in la-la land. Odd, because I am always paying attention... and vigilant. I had just finished a meeting. [link]
  • “Cristina, they detained Evo with his airplane and they won’t let him leave Europe” [link]
  • “What? Evo? Evo Morales has been detained?” Immediately, I remember the last picture I saw of him, in Russia [link]
  • Next to Putin, Nicolas Maduro and other Chiefs of State. “But what happened Rafael?” [link]
  • “Multiple countries revoked the flight permit and he is in Vienna”, he replies [link]
  • Definitely, everyone is crazy. A Chief of State and his airplane have total immunity. This level of impunity is unacceptable. [link]
  • Rafael tells me that he will urgently call Ollanta Humala for an urgent meeting of UNASUR [link]
  • I call Evo. On the other line, his voice responds calmly “Hello friend, how are you doing?”. He asks me how I am doing? [link]
  • He has thousands of years of civilization more than me. He describes the situation. “I am here, in a small room in the airport...” [link]
  • “I am not going to let them inspect my airplane. I am not a thief”. Simply perfect. Strength Evo. [link]
  • CFK: “Let me call the Cancilleria. I want to see the jurisdiction, agreements and which court to go to. I’ll call you later”. “Thank you friend”. [link]
  • They confirm the absolute immunity under customary law, embodied in the 2004 convention and The Hague Court. [link]
  • If Austria does not let him leave, or wants to inspect his airplane, he can go to the international court at The Hague and ask.... [link]
  • Yes! AN INJUNCTION. I don't know if I should laugh or cry. You realize what injunctions are used for [link]
  • Well, if not, we can send a judge from here. Mother of God! What a world! [link]
  • I call Evo again. His Minister of Defense takes note. In Austria it is 3am. They are going to try to reach the authorities [link]
  • I talk with Pepe (Mujica). He is outraged. He is right. The whole thing is very humiliating. Rafa calls me again [link]
  • Ollanta notifies me that he will call for a meeting of UNASUR. It is 00:25 AM. Tomorrow will be a long and hard day. Calm. They won't succeed. [link]

Posted on 03 Jun 2013


Need for Exercises

by Miguel de Icaza

For many years, I have learned various subjects (mostly programming related, like languages and frameworks) purely by reading a book, blog posts or tutorials on the subjects, and maybe doing a few samples.

In recent years, I "learned" new programming languages by reading books on the subject. And I have noticed an interesting phenomenon: when I have a choice between using these languages on a day-to-day basis or using another language I am already comfortable with, I go for the language I am comfortable with. This, despite my inner desire to use the hot new thing, or to try out new ways of solving problems.

I believe the reason this is happening is that most of the texts I have read that introduce these languages are written by hackers and not by teachers.

What I mean by this is that these books are great at describing and exposing every feature of the language and have some clever examples shown to you, but none of these actually force you to write code in the language.

Compare this to Scheme and the book "Structure and Interpretation of Computer Programs". That book is designed with teaching in mind, so at the end of every section where a new concept has been introduced, the authors have a series of exercises specifically tailored to use the knowledge that you just gained and put it to use. Anyone that reads that book and does the exercises is going to be a guaranteed solid Scheme programmer, and will know more about computing than from reading any other book.

In contrast, the experience of reading a modern computing book from most of the high-tech publishers is very different. Most of the books being published do not have an educator reviewing the material; at best they have an editor that will fix your English, reorder some material, and make sure the proper text is italicized and your samples are monospaced.

When you finish a chapter in a modern computing book, there are no exercises to try. Your choices are either to take a break by checking some blogs, or to keep marching in a quest to collect more facts in the next chapter.

During this process, while you amass a bunch of information, at some neurological level, you have not really mastered the subject, nor gained the skills that you wanted. You have merely collected a bunch of trivia which most likely you will only put to use in an internet discussion forum.

What books involving an educator will do is include exercises that have been tailored to use the concepts that you just learned. When you come to this break, instead of drifting to the internet you can sit down and try to put your new knowledge to use.

Well-developed exercises are an application of the psychology of Flow: they ensure that the exercise matches the skills that you have developed, and they guide you through a path that keeps you in an emotional state ranging from control to arousal and joy (flow).

Anecdote Time

Back in 1988, when I first got the first edition of "The C++ Programming Language", there were a couple of very simple exercises in the first chapter that took me a long time to get right, and they both proved very educational.

The first exercise was "Compile Hello World". You might think: that is an easy one, I am going to skip it. But I had decided that I was going to do each and every single one of the exercises in the book, no matter how simple. So if the exercise said "Build Hello World", I would build Hello World, even if I was already a seasoned assembly language programmer.

It turned out that getting "Hello World" to build and run was very educational. I was using the Zortech C++ compiler on DOS back then, and getting a build turned out to be almost impossible: I could not get the application to build, I got some obscure error, and there was no obvious way to fix it.

It took me days to figure out that I had the Microsoft linker in my path before the Zortech Linker, which caused the build to fail with the obscure error. An important lesson right there.

On Error Messages

The second exercise that I struggled with was a simple class. The class was missing a semicolon at the end. But unlike modern compilers, the Zortech C++ compiler's error message at the time was less than useful. It took me a long time to spot the missing semicolon, because I was not paying close enough attention.

Doing these exercises trains your mind to recognize that "useless error message gobble gobble" actually means "you are missing a semicolon at the end of your class".

More recently, I learned in this same hard way that the F# error message "The value or constructor 'foo' is not defined" really means "You forgot to use 'rec' in your let", as in:

let foo x =
   if x = 1 then
     1
   else
     foo (x-1)

That is a subject for another post, but the F# error message should tell me what I did wrong at a language level, as opposed to explaining to me why the compiler is unable to figure things out in its internal processing of the matter.

Plea to book authors

Nowadays we are cranking out books left and right to explain new technologies, but rarely do these books get input from teachers and professional pedagogues. So we end up accumulating a lot of information; we sound lucid at cocktail parties and might even engage in a pointless engineering debate over features we barely master. But we have not learned.

Coming up with the ideas to try out what you have just learned is difficult. As you think of things that you could do, you quickly find that you are missing knowledge (discussed in further chapters) or your ideas are not that interesting. In my case, my mind drifts into solving other problems, and I go back to what I know best.

Please, build exercises into your books. Work with teachers to find the exercises that match the material just exposed and help us get in the zone of Flow.

Posted on 25 Apr 2013


Introducing MigCoin

by Miguel de Icaza

Non-government controlled currency systems are now in vogue. Currencies that are not controlled by some government that might devalue your preciously earned pesos at the blink of an eye.

BitCoin is powered by powerful cryptography and math to ensure a truly digital currency. But it poses significant downsides, for one, governments can track your every move, and every transaction is stored on each bitcoin, making it difficult to prevent a tax audit in the future by The Man.

Today, I am introducing an alternative currency system that both keeps the anonymity of your transactions, and is even more secure than the crypto mumbo jumbo of bitcoins.

Today, I am introducing the MigCoin.

Like bitcoins, various MigCoins will be minted over time, to cope with the creation of value in the world.

Like bitcoins, the supply of MigCoins will be limited and will eventually plateau. Like bitcoin, the MigCoin is immune to the will of some Big Government bureaucrat that wants to control the markets by printing or removing money from circulation. Just like this:

Projected number of Bitcoins and MigCoins over time.

Unlike bitcoins, I am standing by them and I am not hiding behind a false name.

Like BitCoins, MigCoins come with a powerful authentication system that can be used to verify their authenticity. Unlike BitCoins, they do not suffer from this attached "log" that Big Brother and the Tax Man can use to come knocking on your door one day.

How does this genius of a currency work? How can you guarantee that governments or rogue entities won't print their own MigCoins?

The answer is simple my friends.

MigCoins are made of my DNA material.

Specifically, spit.

Every morning, when I wake up, for as long as I remain alive, I will spit on a glass. A machine will take the minimum amount of spit necessary to lay down on a microscope slide, and this is how MigCoins are minted.

Then, you guys send me checks, and I send you the microscope slides with my spit.

To accept MigCoins payments all you have to do is carry a DNA sequencer with you, put the microscope slide on it, press a button, and BAM! 10 minutes later you have your currency validated.

To help accelerate the adoption of MigCoins, I will be offering bundles of MigCoins with the Illumina MiSeq Personal DNA sequencer:

Some might argue that the machine alone is 125,000 dollars and validating one MigCoin is going to set me back 750 dollars.

Three words my friends: Economy of Scale.

We are going to need a few of you to put some extra pesos early on to get the prices to the DNA machines down.

Early Adopters of MigCoins

I will partner with visionaries like these to get the first few thousand sequencers built and start getting the prices down. Then we will hire that ex-Apple guy that was CEO of JC Penney to get his know-how on getting the prices of these puppies down.

Like Bitcoin, I expect to see a lot of nay-sayers and haters. People that will point out flaws in this system. But you know what?

The pace of innovation can not be held back by old-school economists that "don't get it" and pundits on CNN trying to make a quick buck. Haters are going to hate. 'nuff said.

Next week, I will be launching MigXchange, a place where you can trade your hard BitCoins for slabs of spit.

Join the revolution! Get your spit on!

Posted on 12 Apr 2013


Exclusive! What we know about the Facebook Phone

by Miguel de Icaza

We obtained some confidential information about the upcoming Facebook Phone. Here is what we know about it so far:

Posted on 29 Mar 2013


How I ended up with Mac

by Miguel de Icaza

While reading Dave Winer's Why Windows Lost to Mac post, I noticed many parallels with my own experience with Linux and the Mac. I will borrow the timeline from Dave's post.

I invested years of my life in the Linux desktop, first as a personal passion (Gnome) and then while working for two Linux companies (my own, Ximian, and then Novell). During this period, I believed strongly in dogfooding our own products. I believed that both my team and I had to use the software we wrote and catch bugs and errors before they reached our users. We were pretty strict about it: both from an ideological point of view, back in the days of all-software-will-be-free, and then practically, during my tamer business days. I routinely chastised fellow team members that had opted for the easy path and avoided our Linux products.

While I had Macs at Novell (to support Mono on MacOS), it would take a couple of years before I used a Mac regularly. On a vacation to Brazil around 2008 or so, I decided to take only the Mac for the trip and learn to live with the OS as a user, not just as a developer.

Computing-wise, that three-week vacation turned out to be very relaxing. The machine would suspend and resume without problems, WiFi just worked, audio did not stop working, and I spent three weeks without having to recompile the kernel to adjust this or that, fight the video drivers, or deal with the bizarre and random speed degradation that my ThinkPad suffered.

While I missed the comprehensive Linux toolchain and userland, I did not miss having to chase the proper package for my current version of Linux, or beg someone to package something. Binaries just worked.

From this point on, using the Mac was a part-time gig for me. During the Novell layoffs, I returned my laptop to Novell and was left with only one Linux desktop computer at home. I purchased a Mac laptop, and while I fully intended to keep using Linux, the dogfooding driver was no longer there.

Dave Winer writes, regarding Windows:

Back to 2005, the first thing I noticed about the white Mac laptop, that aside from being a really nice computer, there was no malware. In 2005, Windows was a horror. Once a virus got on your machine, that was pretty much it. And Microsoft wasn't doing much to stop the infestation. For a long time they didn't even see it as their problem. In retrospect, it was the computer equivalent of Three Mile Island or Chernobyl.

To me, the fragmentation of Linux as a platform, the multiple incompatible distros, and the incompatibilities across versions of the same distro were my Three Mile Island/Chernobyl.

Without noticing, I stopped turning on the screen of my Linux machine during 2012. By the time I moved to a new apartment in October of 2012, I did not even bother plugging the machine back in, and to this date I have yet to turn it on.

Even during all of my dogfooding and Linux advocacy days, whenever I had to recommend a computer to a single new user, I recommended a Mac. And whenever I gave away computer gifts to friends and family, it was always a Mac. Linux just never managed to cross the desktop chasm.

Posted on 05 Mar 2013


The Making of Xamarin Studio

by Miguel de Icaza

We spent a year designing the new UI and features of Xamarin Studio (previously known as MonoDevelop).

I shared some stories of the process on the Xamarin blog.

After our launch, we open sourced all of the work that we did, as well as our new Gtk+ engine for OSX. Lanedo helped us tremendously in making Gtk+ 2.x both solid and amazing on OSX (down to the new Lion scrollbars!). All of their work has either been upstreamed to Gtk+ or is in the process of being upstreamed.

Posted on 22 Feb 2013


"Reality Distortion Field"

by Miguel de Icaza

"Reality Distortion Field" is a modern day cop out. A tool used by men that lack the intellectual curiosity to explain the world, and can deploy at will to explain excitement or success in the market place. Invoking this magical super power saves the writer from doing actual work and research. It is a con perpetuated against the readers.

The expression originated as an observation made by those that worked with Steve to describe his convincing passion. It was an insider joke/expression which has now been hijacked by sloppy journalists whenever a subject is over their heads.

The official Steve Jobs biography left much to be desired. Here a journalist was given unprecedented access to Steve Jobs and could have gotten answers to the thousands of questions that we still have to this day. How did he approach problems? Did he have a method? How did he really work with his team? How did he turn his passion for design into products? How did he make strategic decisions about the future of Apple? How did the man balance engineering and marketing problems?

The biography has some interesting anecdotes, but fails to answer any of these questions. The biographer was not really interested in understanding or explaining Steve Jobs. He collected a bunch of anecdotes, strung them together in chronological order, had the text edited and cashed out.

Whenever the story gets close to an interesting historical event, or starts exploring a big unknown of Steve's work, we are condescendingly told that "Steve Activated the Reality Distortion Field".

Every. Single. Time.

Not once did the biographer try to uncover what made people listen to Steve. Not once did he try to understand the world in which Steve operated. The breakthroughs of his work are described with the same passion as a Reuters news feed: an enumeration of his achievements, with anecdotes to glue the thing together.

Consider the iPhone: I would have loved to know how the iPhone project was conceived. What internal process took place that allowed Apple to gain the confidence to become a phone manufacturer. There is a fascinating story of the people that made this happen, millions of details of how this project was evaluated and what the vision for the project was down to every small detail that Steve cared about.

Instead of learning about the amazing hardware and software engineering challenges that Steve faced, we are told over and over that all Steve had to do was activate his special super power.

The biography in short, is a huge missed opportunity. Unprecedented access to a man that reshaped entire industries and all we got was some gossip.

The "Reality Distortion Field" is not really a Steve Jobs super-power, it is a special super power that the technical press uses every time they are too lazy to do research.

Why do expensive and slow user surveys, or purchase expensive research from analysts to explain why some product is doing well or why people are buying it, when you can just slap a "they activated the Reality Distortion Field and sales went through the roof" statement in your article?

As of today, a Google News search for "Reality Distortion Field Apple" reports 532 results for the last month.

Perhaps this is just how the tech press must operate nowadays. There is just no time to do research as new products are being unveiled around the clock, and you need to deliver opinions and analysis on a daily basis.

But as readers, we deserve better. We should reject these explanations for what they are: a cheap grifter trick.

Posted on 07 Nov 2012


Mono 3.0 is out

by Miguel de Icaza

After a year and a half, we have finally released Mono 3.0.

Like I discussed last year, we will be moving to a more nimble release process with Mono 3.0. We are trying to reduce our inventory of pending work and get new features to everyone faster. This means that our "master" branch will remain stable from now on, and that large projects will instead be developed in branches that are regularly landed into our master branch.

What is new

Check our release notes for the full details of this release. But here are some tasty bits:

  • C# Async compiler
  • Unified C# compiler for all profiles
  • 4.5 Async API Profile
  • Integrated Microsoft's newly open sourced stacks:
    • ASP.NET MVC 4
    • ASP.NET WebPages
    • Entity Framework
    • Razor
    • System.Json (replaces our own)
  • New High performance Garbage Collector (SGen - with many performance and scalability improvements)
  • Metric ton of runtime and class library improvements.

Also, expect F# 3.0 to be bundled in our OSX distribution.

Posted on 22 Oct 2012

