The Sophisticated Procrastinator - Volume 1

Let me share with you some links that I found interesting in the past few weeks. These should keep the most diligent person busy for a few hours.

Software Reads

Talbot Crowell's Introduction to F# 3.0 slides from Boston CodeCamp.

Bertrand Meyer (The creator of Eiffel, father of good taste in engineering practices) writes Fundamental Duality of Software Engineering: on the specifications and tests. This is one of those essays where every idea is beautifully presented. A must read.

Good article on weakly ordered CPUs.

MonkeySpace slide deck on MonoGame.

David Siegel shares a cool C# trick, switch expressions.

Oak: Frictionless development for ASP.NET MVC.

Simon Peyton Jones on video talks about Haskell, past, present and future. A very tasty introductory talk to the language. David Siegel says about this:

Simon Peyton-Jones is the most eloquent speaker on programming languages. Brilliant, funny, humble, adorable.

Rob Pike's talk on Concurrency is not Parallelism. Rob is one of the crispest minds in software development: anything he writes, you must read; anything he says, you must listen to.

Answering the question of what is the fastest way to access properties dynamically: DynamicMethod, LINQ expressions, or MethodInfo. A discussion with Eric Maupin.

OpenGL ES Quick Reference Card, plus a good companion: Apple's Programming Guide.

Interesting Software

SparkleShare, the open source file syncing service running on top of Git, has released their feature-complete product and is preparing for a 1.0 release. SparkleShare runs on Linux, Mac and Windows. Check out their Release Notes.

Experts warn that Canonical might well distribute a patched version that modifies your documents and spreadsheets to include ads and Amazon referral links.

Pheed, a Twitter competitor with a twist.

Better debugging tools for Google Native Client.

Touch Draw comes to OSX: a great vector drawing application, a good companion to Pixelmator, and handy for maintaining iOS artwork. It has solid support for structured graphics and for importing/exporting Visio files.

MonoGame 3D on the Raspberry Pi video.

Fruit Rocks, a fun little game for iOS.

@Redth, the one-man factory of cool hacks, has released:

  • PassKitSharp, a C# library to generate, maintain and process Apple's Passbook files.
  • Zxing.Mobile, an open source barcode library built on top of ZXing (Zebra Crossing) that runs on iOS and Android.
  • PushSharp, a server-side library for sending Push Notifications to iOS (iPhone/iPad APNS), Android (C2DM and GCM, Google Cloud Messaging), Windows Phone, Windows 8, and Blackberry devices.

Coding on Passbook: Lessons Learned.

Building a Better World

Phil Haack blogs about MonkeySpace.

Patrick McKenzie writes Designing First Run Experiences to Delight Users.

Kicking the Twitter habit.

Twitter Q&A with TJ Fixman, writer for Insomniac Games.

Debunking the myths of budget deficits: Children and Grandchildren do not pay for budget deficits, they get interest on the bonds.

Cool Stuff

Live updates on HoneyPots set up by the HoneyNet Project.

Programming F# 3.0, 2nd Edition is out. Chris Smith's delightful book on F# has been updated to cover the new and amazing type providers in F# 3.0.

ServiceStack now has 113 contributors.

News

From Apple Insider: Google may settle mobile FRAND patent antitrust claim.

The Salt Lake City Tribune editorial board endorses Obama over Romney:

In considering which candidate to endorse, The Salt Lake Tribune editorial board had hoped that Romney would exhibit the same talents for organization, pragmatic problem solving and inspired leadership that he displayed here more than a decade ago. Instead, we have watched him morph into a friend of the far right, then tack toward the center with breathtaking aplomb. Through a pair of presidential debates, Romney’s domestic agenda remains bereft of detail and worthy of mistrust.

Therefore, our endorsement must go to the incumbent, a competent leader who, against tough odds, has guided the country through catastrophe and set a course that, while rocky, is pointing toward a brighter day. The president has earned a second term. Romney, in whatever guise, does not deserve a first.

From Blue States are from Scandinavia, Red States are from Guatemala the author looks at the differences in policies in red vs blue states, and concludes:

Advocates for the red-state approach to government invoke lofty principles: By resisting federal programs and defying federal laws, they say, they are standing up for liberty. These were the same arguments that the original red-staters made in the 1800s, before the Civil War, and in the 1900s, before the Civil Rights movement. Now, as then, the liberty the red states seek is the liberty to let a whole class of citizens suffer. That’s not something the rest of us should tolerate. This country has room for different approaches to policy. It doesn’t have room for different standards of human decency.

Esquire's take on the 2nd Presidential Debate.

Dave Winer wrote Readings from News Execs:

There was an interesting juxtaposition. Rupert Murdoch giving a mercifully short speech saying the biggest mistake someone in the news business could make is thinking the reader is stupid. He could easily have been introducing the next speaker, Bill Keller of the NY Times, who clearly thinks almost everyone who doesn't work at the NY Times is stupid.

What do you know, it turns out that Bill Moyers is not funded by the government, nor does he get tax money, as many Republicans would like people to believe. The correction is here.

Twitter Quotes

Joseph Hill

"Non-Alcoholic Sparkling Beverage" - Whole Foods' $7.99 name for "bottle of soda".

Jonathan Chambers

Problem with most religious people is that their faith tells them to play excellently in game of life, but they want to be the referees.

Hylke Bons on software engineering:

"on average, there's one bug for every 100 lines of code" this is why i put everything on one line

Waldo Jaquith:

If government doesn't create jobs, isn't Romney admitting that his campaign is pointless?

Alex Brown

OH "It is a very solid grey area" #sc34 #ooxml

Jo Shields

"I don't care how many thousand words your blog post is, the words 'SYMBIAN WAS WINNING' mean you're too high on meth to listen too.

Jeremy Scahill, on warmonger Max Boot, asks the questions:

Do they make a Kevlar pencil protector? Asking for a think tanker.

Max Boot earned a Purple Heart (shaped ink stain on his shirt) during the Weekly Standard War in 1994.

Tim Bray

"W3C teams with Apple, Google, Mozilla on WebPlatform"... or we could all just sponsor a tag on StackOverflow.

David Siegel

Most programmers who claim that types "get in the way" had a sucky experience with Java 12 years ago, tried Python, then threw the baby out.

Outrage Dept

How Hollywood Studios employ creative accounting to avoid sharing the profits with the participants. If you were looking for ways to scam your employees and partners, look no further.

Starvation in Gaza: State forced to release 'red lines' document for food consumption.

Dirty tricks and disturbing trends: Billionaires warn employees that if Obama is reelected, they will be facing layoffs.

Israeli Children Deported to South Sudan Succumb to Malaria:

Here we are today, three months later, and within the last month alone, these two parents lost two children, and the two remaining ones are sick as well. Sunday is already in hospital with malaria, in serious condition, and Mahm is sick at home. “I’ve only two children left,” Michael told me today over the phone. The family doesn’t have money to properly treat their remaining children. The hospitals are at full capacity and more people leave them in shrouds than on their own two feet. I ask you, beg of you to help me scream the story of these children and their fate, dictated by the heartless, immoral Israeli government.

When Suicide is Cheaper, the horrifying tales of Americans who cannot afford health care.

Paul Ryan is not that different from Todd Akin when it comes to women's rights.

Interesting Discussions, Opinions and Articles

A Windows 8 critique: someone is not very happy with it.

On "meritocracy": what is wrong with it.

Fascinating read on the fast moving space of companies: Intimate Portrait of Innovation, Risk and Failure Through Hipstamatic's Lens.

Kathy Sierra discusses sexism in the tech world. How she changed her mind about it, and the story that prevented her from seeing it.

Response to @antirez's sexist piece.

Chrystia Freeland's The Self-Destruction of the 1% is a great article that touches on the points of her book Plutocrats.

Sony's Steep Learning Process, a look at the changing game with a focus on Sony's challenges.

Entertainment

One Minute Animated Primers on Major Theories on Religion.

Cat fact and Gif provides cat facts, each with a gif to go with it. The ultimate resource of cat facts and gifs.

Posted on 21 Oct 2012 by Miguel de Icaza

Drowning Good Ideas with Bloat. The tale of pkg.m4.

The gnome-config script was a precursor to pkg-config; both are tools that you can run to extract the information needed to compile some code, link some code, or check for a version. gnome-config itself was a pluggable version of Tcl's tclConfig.sh script.

The idea is simple: pkg-config is a tiny little tool that uses a system database of packages to provide version checking and build information to developers. Said database is merely a well-known directory in the system containing files with the extension ".pc", one per package.
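
To give a sense of what that database contains, here is a hypothetical mono.pc file; the path, version and flags below are illustrative, not what any particular distribution actually ships:

# /usr/lib/pkgconfig/mono.pc (illustrative values)
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: Mono
Description: Mono runtime embedding library
Version: 2.10.9
Libs: -L${libdir} -lmono-2.0
Cflags: -I${includedir}/mono-2.0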

These scripts are designed to be used in shell scripts to probe if a particular software package has been installed, for example, the following shell script probes whether Mono is installed in the system:

# shell script

if pkg-config --exists mono; then
    echo "Found Mono!"
else
    echo "You can download mono from www.go.mono.com"
fi

It can also be used in simple makefiles to avoid hardcoding the paths to your software. The following makefile shows how:

CFLAGS = `pkg-config --cflags mono`
LIBS   = `pkg-config --libs mono`

myhost: myhost.c
	$(CC) $(CFLAGS) -o myhost myhost.c $(LIBS)

And if you are using Automake and Autoconf to probe for the existence of a module with a specific version and extract the flags needed to build against it, you would do it like this:

AC_SUBST(MONO_FLAGS)
AC_SUBST(MONO_LIBS)
if pkg-config --atleast-version=2.10 mono; then
    MONO_FLAGS=`pkg-config --cflags mono`
    MONO_LIBS=`pkg-config --libs mono`
else
   AC_MSG_ERROR("You need at least Mono 2.10")
fi

There are two main use cases for pkg-config.

Probing: You use the tool to probe for some condition about a package and take an action based on the result. For this, you use pkg-config's exit code in your scripts to determine whether the condition was met. This is what both the Automake sample and the first script show.

Compile Information: You invoke the tool, which writes the results to standard output. To store the result or pass the values along, you use shell backticks (example: version=`pkg-config --version`). That is all there is to it.

The tool is so immensely simple that anyone can learn every command that matters in less than 5 minutes. The whole thing is beautiful because of its simplicity.

The Siege by the Forces of Bloat

Perhaps it was a cultural phenomenon, perhaps it was someone with nothing better to do, perhaps it was someone just trying to be thorough, but somebody introduced one of the most poisonous memes into the pool of ideas around pkg-config.

Whoever did this thought that the "if" statement in shell was too complex a command to master, or that someone might not be able to find the backtick on their keyboard.

And they hit us, and they hit us hard.

They introduced pkg.m4, a macro file intended to be used with autoconf, which allows you to replace a handful of command line invocations of pkg-config with one of their macros (PKG_CHECK_MODULES, PKG_CHECK_EXISTS). To do this, they wrote a 200-line script which, when used, replaces one line of shell code with almost a hundred. Here is a handy comparison of what these offer:

# Shell style
AC_SUBST(MONO_LIBS)
AC_SUBST(MONO_CFLAGS)
if pkg-config --atleast-version=2.10 mono; then
   MONO_CFLAGS=`pkg-config --cflags mono`
   MONO_LIBS=`pkg-config --libs mono`
else
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
fi

#
# With the idiotic macros
#
PKG_CHECK_MODULES([MONO], [mono >= 2.10],[], [
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
])

#
# If you do not need split flags, shell becomes shorter
#
if pkg-config --atleast-version=2.10 mono; then
   CFLAGS="$CFLAGS `pkg-config --cflags mono`"
   LIBS="$LIBS `pkg-config --libs mono`"
else
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
fi

The above shows the full benefit of using the macro: MONO is a prefix that will have LIBS and CFLAGS extracted for it. So the shell script loses. The reality is that the macros only give you access to a subset of pkg-config's functionality (no support for splitting -L and -l arguments, querying provider-specific variable names, or performing macro expansion).

Most projects adopted the macros because they copy/pasted the recipe from somewhere else and thought this was the right way of doing things.

The hidden price is that saving those few lines of code actually inflicts a world of pain on your users. You will probably see this in your forums in the form of:

Subject: Compilation error

I am trying to build your software, but when I run autogen.sh, I get
the following error:

checking whether make sets $(MAKE)... yes
checking for pkg-config... /usr/bin/pkg-config
./configure: line 1758: syntax error near unexpected token `FOO,'
./configure: line 1758: `PKG_CHECK_MODULES(FOO, foo >= 2.9)'

And then you will engage in a discussion that, in the best case scenario, helps the user correctly configure their ACLOCAL_FLAGS or create their own "setup" script that properly configures their system, and your new users will learn the difference between running a shell script and "sourcing" one in order to set up their development environment.

In the worst case scenario, the discussion will devolve into how stupid your user is for not knowing how to use a computer and how he should be shot in the head and taken out to the desert for his corpse to be eaten by vultures; because, god dammit, they should have googled that on their own, they should never have installed two separate automake versions in two prefixes in the first place without properly updating their ACLOCAL_FLAGS, and they should have figured out on their own that their paths were wrong to begin with. Seriously, what moron in this day and age is not familiar with the limitations of aclocal and the best practices for using system-wide m4 macros?

Hours are spent on these discussions every year. Potential contributors to your project are driven away, countless hours that could have gone into fixing bugs and producing code are wasted, and your users are frustrated. And you saved 4 lines of code.

pkg.m4 is a poison that is holding us back.

We need to end this reign of terror.

Send pull requests to eliminate that turd, and ridicule anyone that suggests that there are good reasons to use it. In the war for good taste, it is ok to vilify and scourge anyone that defends pkg.m4.

Posted on 20 Oct 2012 by Miguel de Icaza

Why Mitt does not need an Economic Plan

Mitt Romney does not need to have an economic plan. He does not need to have a plan to cut the deficit or to cut services.

It is now well understood that to get the US out of the recession, the government has to inject money into the economy. To inject money into the economy, the US needs to borrow some money and spend it. Borrowing costs are also at an all-time low, so the price to pay is very low.

Economists know this, and Republicans know this.

But the Republicans' top priority is to get Obama out of office at any cost. Even at the cost of prolonging the recession, damaging the US credit rating and keeping people unemployed.

The brilliance of the Republican strategy is that they have convinced the world that the real problem facing the US is the debt. Four years of non-stop propaganda in newspapers and on TV shows have turned everyone into a "fiscal conservative". The propaganda efforts have succeeded in convincing the world that US economic policy should be subject to the same rules as balancing a household budget (I won't link to this idiocy).

The campaign has been brilliant and has forced the Democrats to adopt policies of austerity instead of policies of growth. Instead of spending left and right to create value, we are cutting. And yet, nobody has stopped the propaganda and pointed out that growth often comes after spending money. Startups start in the red and are funded for several years before they become profitable; companies go public and use the IPO to raise capital to grow, and for many years they lose money until their investments pay off and allow them to turn the tide.

So this mood has forced Obama to talk about cuts. He needs to be detailed about his cuts, he needs to be a fiscal conservative.

But Economists and Republicans know what the real fix is. They know they have to spend money.

If Romney is elected to office, he will do just that. He will borrow and spend money, because that is the only way of getting out of the recession. That is why his plan does not need any substance, and why he can ignore the calls for more details: he has no intention of following through on them.

Obama made a critical mistake in his presidency. He decided to compromise with Republicans, begging to be loved by them, and in the process he betrayed his own base and played right into the Republicans' plans.

Posted on 04 Oct 2012 by Miguel de Icaza

Mono 2.11.4 is out

A couple of weeks ago we released Mono 2.11.4; I had not had time to blog about it.

Since our previous release a month before, we had some 240 commits, spread like this:

488 files changed, 28716 insertions(+), 6921 deletions(-)

Among the major updates in this release:

  • Integrated the Google Summer of Code code for Code Contracts.
  • Integrated the Google Summer of Code code for TPL's DataFlow.
  • Plenty of networking stack fixes and updates (HTTP stack, web services stack, WCF)
  • Improvements to the SGen GC.
  • TPL fixes for AOT systems like the iPhone.
  • Debugger now supports batched method invocations.

And of course, a metric ton of bug fixes all around.

Head over to Mono's Download Page to get the goods. We would love to hear about any bugs so we can have a great stable release.

Posted on 02 Oct 2012 by Miguel de Icaza

TypeScript: First Impressions

Today Microsoft announced TypeScript, a typed superset of Javascript. This means that existing Javascript code can be gradually modified to add typing information to improve the development experience: both by providing better errors at compile time and by providing code-completion during development.
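
As a rough sketch of what that gradual migration looks like (my own example, not code from the announcement), you can start with plain Javascript and add annotations one piece at a time:

// Plain Javascript is already valid TypeScript.
function greet(person) {
    return "Hello, " + person.name;
}

// The same code, with type information added incrementally.
interface Person {
    name: string;
    age?: number;   // optional member
}

function greetTyped(person: Person): string {
    return "Hello, " + person.name;
}

greetTyped({ name: "Anders" });      // fine
// greetTyped({ nome: "Anders" });   // caught at compile time

The untyped and the typed versions can coexist in the same program, which is what makes the migration gradual.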

As a language fan, I like the effort, just like I pretty much like most new language efforts aimed at improving developer productivity: from C#, to Rust, to Go, to Dart and to CoffeeScript.

A video introduction from Anders was posted on Microsoft's web site.

The Pros

  • Superset of Javascript allows easy transition from Javascript to typed versions of the code.
  • Open source from the start, using the Apache License.
  • Strong types help developers catch errors before they deploy the code; this is a very welcome addition to the developer toolchest. Script#, Google GWT and C# on the web all try to solve the same problem in different ways.
  • Extensive type inference, so you get to keep a lot of the dynamism of Javascript, while benefiting from type checking.
  • Classes, interfaces and visibility are first-class citizens. It formalizes them for those of us who like this model instead of the roll-your-own prototype system.
  • Nice syntactic sugar reduces boilerplate code to explicit constructs (class definitions for example).
  • TypeScript is distributed as a Node.JS package, and it can be trivially installed on Linux and MacOS.
  • The adoption can be done entirely server-side, or at compile time, and requires no changes to existing browsers or runtimes to run the resulting code.

Out of Scope

Type information is erased when the code is compiled, just like Java erases generic information when it compiles. This means that the underlying Javascript engine is unable to optimize the resulting code based on the strong type information.
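
To make the erasure point concrete, here is a small sketch of mine (not taken from the TypeScript documentation). The annotations only exist at compile time; the emitted Javascript carries no trace of them:

// TypeScript input:
function add(a: number, b: number): number {
    return a + b;
}

// Roughly the Javascript that comes out; the engine just sees an
// ordinary dynamically typed function:
function add(a, b) {
    return a + b;
}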

Dart on the other hand is more ambitious as it uses the type information to optimize the quality of the generated code. This means that a function that adds two numbers (function add (a,b) { return a+b;}) can generate native code to add two numbers, basically, it can generate the following C code:

double add (double a, double b)
{
    return a+b;
}

While weakly typed Javascript must generate something like:

JSObject add (JSObject a, JSObject b)
{
    if (type (a) == typeof (double) &&
        type (b) == typeof (double))
        return a.ToDouble () + b.ToDouble ();
    else
        JIT_Compile_add_with_new_types ();
}

The Bad

The majority of the Web is powered by Unix.

Developers use MacOS and Linux workstations to write the bulk of the code, and deploy to Linux servers.

But TypeScript only delivers half of the value of using a strongly typed language to Unix developers: the strong typing itself. Intellisense, code completion and refactoring are tools that are only available to Visual Studio Professional users on Windows.

There is no Eclipse, MonoDevelop or Emacs support for any of the language features.

So Microsoft will need to convince Unix developers to use this language merely based on the benefits of strong typing, a much harder task than luring them with both language features and tooling.

There is some basic support for editing TypeScript from Emacs, which is useful to try the language, but without Intellisense, it is obnoxious to use.

Posted on 01 Oct 2012 by Miguel de Icaza

Free Market Fantasies

This recording of a Q&A with Noam Chomsky in 1997 could be a Q&A session done last night about bailouts, corporate welfare, and the various distractions they use to keep us in the dark, like caring about "fiscal responsibility".

Also on iTunes and Amazon.

Posted on 07 Sep 2012 by Miguel de Icaza

2012 Update: Running C# on the Browser

With our push to share the kernel of your software in reusable C# libraries and build a native experience per platform (iOS, Android and WP7 on phones; WPF on Windows, MonoMac on OSX, Gtk on Linux), one question always comes up: what about building a web UI that also shares some of the code?

Until very recently the answer was far from optimal, and included things like: put the kernel on the server and use some .NET stack to ship the HTML to the client.

Today there are two solid choices to run your C# code on the browser and share code between the web and your native UIs.

JSIL

JSIL will translate the ECMA/.NET Intermediate Language into Javascript and will run your code in the browser. JSIL is pretty sophisticated, and its approach to running IL code in the browser also includes a bridge that allows your .NET code to reference web page elements. This means that you can access the DOM directly from C#.

You can try their Try JSIL page to get a taste of what is possible.

Saltarelle Compiler

The Saltarelle Compiler takes a different approach. It is a C# 4.0 compiler that generates JavaScript instead of generating IL. It is interesting that this compiler is built on top of the new NRefactory which is in turn built on top of our C# Compiler as a Service.

It is a fresh new compiler and, unlike JSIL, it is limited to compiling the C# language. Although it is missing some language features, it is actively being developed.

This compiler was inspired by Script# which is a C#-look-alike language that generated Javascript for consuming on the browser.

Native Client

I left Native Client out, which is not fair, considering that both Bastion and Go Home Dinosaurs are powered by Mono running on Native Client.

The only downside with Native Client today is that it does not run on iOS or Android.

Posted on 06 Sep 2012 by Miguel de Icaza

What Killed the Linux Desktop

True story.

The hard disk that hosted my /home directory on my Linux machine failed so I had to replace it with a new one. Since this machine lives under my desk, I had to unplug all the cables, get it out, swap the hard drives and plug everything back again.

Pretty standard stuff. Plug AC, plug keyboard, plug mouse, but when I got to the speaker cable, I just skipped it.

Why bother setting up the audio?

It will likely break again and will force me to go on a hunting expedition to find out more than I ever wanted to know about the new audio system and the driver technology we are using.

A few days ago I spoke to Klint Finley from Wired, who wrote the article titled OSX Killed Linux. The original line of questioning was about my opinion on Gnome 3's Shell vs Ubuntu's Unity vs Xfce as competing shells.

Personally, I am quite happy with Gnome Shell. I think the team that put it together did a great job, and I love how it enabled the Gnome designers (who historically only design and barely hack) to actually extend the shell, tune the UI and prototype things without having to beg a hacker to implement things for them. It certainly could use some fixes and tuning, but I am sure they will address those eventually.

What went wrong with Linux on the Desktop

In my opinion, the problem with Linux on the Desktop is rooted in the developer culture that was created around it.

Linus, despite being a low-level kernel guy, set the tone for our community years ago when he dismissed binary compatibility for device drivers. The kernel people might have some valid reasons for it, and might have forced the industry to play by their rules, but the Desktop people did not have the power that the kernel people did. But we did keep the attitude.

The attitude of our community was one of engineering excellence: we do not want deprecated code in our source trees, we do not want to keep broken designs around, we want pure and beautiful designs and we want to eliminate all traces of bad or poorly implemented ideas from our source code trees.

And we did.

We deprecated APIs, because there was a better way. We removed functionality because "that approach is broken", for degrees of broken from "it is a security hole" all the way to "it does not conform to the new style we are using".

We replaced core subsystems in the operating system with poor transition paths. We introduced compatibility layers that were not really compatible, nor were they maintained. When faced with "this does not work", the community response was usually "you are doing it wrong".

As long as you had an operating system that was 100% free, and you could patch and upgrade every component of your operating system to keep up with the system updates, you were fine and it was merely an inconvenience that lasted a few months while the kinks were sorted out.

The second dimension to the problem is that no two Linux distributions agreed on which core components the system should use. Either they did not agree, the schedules of the transitions were out of sync, or there were competing implementations of the same functionality.

The efforts to standardize on a kernel and a set of core libraries were undermined by the Distro of the Day that held the position of power. If you were the top dog, you did not want to make any concessions that would help other distributions catch up with you. Being incompatible became a way of gaining market share. A strategy that continues to be employed by the 800 pound gorillas in the Linux world.

To sum up: (a) First dimension: things change too quickly, breaking both open source and proprietary software alike; (b) incompatibility across Linux distributions.

This killed the ecosystem for third party developers trying to target Linux on the desktop. You would try once, make your best effort to support the "top" distro or, if you were feeling generous, "the top three" distros, only to find out that your software no longer worked six months later.

Supporting Linux on the desktop became a burden for independent developers.

But at this point, those of us in the Linux world still believed that we could build everything as open source software. The software industry as a whole had a few home runs, and we were convinced we could implement those ourselves: spreadsheets, word processors, design programs. And we did a fine job at that.

Linux pioneered solid package management and the most advanced software updating systems. We did a good job, considering our goals and our culture.

But we missed the big picture. We alienated every third party developer in the process. The ecosystem that has sprung to life with Apple's OSX AppStore is just impossible to achieve with Linux today.

The Rise of OSX

When OSX was launched it was by no means a very sophisticated Unix system. It had an old kernel, an old userland, poor compatibility with modern Unix, primitive development tools and a very pretty UI.

Over time Apple addressed the majority of the problems with its Unix stack: they improved compatibility, improved their kernel, more open source software started working and things worked out of the box.

The most pragmatic contributors to Linux and open source gradually changed their goals from "a world run by open source" to "the open web". Others found that messing around with their audio card every six months to play music and the hardships of watching video on Linux were not worth that much. People started moving to OSX.

Many hackers moved to OSX. It was a good looking Unix, with working audio, PDF viewers, working video drivers, codecs for watching movies and at the end of the day, a very pleasant system to use. Many exchanged absolute configurability of their system for a stable system.

As for myself, I had fallen in love with the iPhone, so using a Mac on a day-to-day basis was a must. Having been part of the Linux Desktop efforts, I felt a deep guilt for liking OSX and moving a lot of my work to it.

What we did wrong

Backwards compatibility, and compatibility across Linux distributions, is not a sexy problem. It is not even remotely an interesting problem to solve. Nobody wants to do that work, everyone wants to innovate and be responsible for the next big feature in Linux.

So Linux was left with idealists that wanted to design the best possible system without having to worry about boring details like support and backwards compatibility.

Meanwhile, on Windows 8 you can still run the 2001 Photoshop that shipped when XP was launched. And you can still run your old OSX apps on Mountain Lion.

Back in February I attended FOSDEM and two of my very dear friends were giggling out of excitement at their plans to roll out a new system that will force many apps to be modified to continue running. They have a beautiful vision to solve a problem that I never knew we had, and that no end user probably cares about, but every Linux desktop user will pay the price.

That day I stopped feeling guilty about my new found love for OSX.

Update September 2nd, 2012

Clearly there is some confusion over the title of this blog post, so I wanted to post a quick follow-up.

What I mean with the title is that Linux on the Desktop lost the race for a consumer operating system. It will continue to be a great engineering workstation (that is why I am replacing the hard disk in my system at home) and yes, I am aware that many of my friends use Linux on the desktop and love it.

But we lost the chance of becoming a mainstream consumer OS. What this means is that nobody is recommending that a non-technical person go get a computer with Linux on it for their desktop needs (unless they are doing so for ideological reasons).

We had our share of chances. The best one was when Vista bombed in the marketplace. But we had our own internal battles and struggles to deal with. Some of you have written your own takes on our struggles in that period.

Today, the various Linux desktops are the best they have ever been: Ubuntu and Unity, Fedora and GnomeShell, RHEL and Gnome 2, Debian and Xfce, plus the KDE distros. And yet, we still have four major desktop APIs, and about half a dozen popular and slightly incompatible versions of Linux on the desktop: each with its own curated OS subsystems, with different packaging systems, with different dependencies and slightly different versions of the core libraries. That works great for pure open source, but not so much for proprietary code.

Shipping and maintaining apps for these rapidly evolving platforms is a big challenge.

Linux succeeded in other areas: servers and mobile devices. But on the desktop, our major feature and our major differentiator is price, and it comes at the expense of a timid selection of native apps and frequent breakage. The Linux Hater blog parodied this in a series of posts called the Greatest Hates.

The only way to fix Linux is to take one distro and one set of components as a baseline, abandon everything else, and have everyone contribute to this single Linux. Whether this is Canonical's Ubuntu, Red Hat's Fedora, Debian's system or a new joint effort is something that intelligent people will disagree on until the end of days.

Posted on 29 Aug 2012 by Miguel de Icaza

Mono 2.11.3 is out

This is our fourth preview release of Mono 2.11. This version includes Microsoft's recently open sourced EntityFramework and has been updated to match the latest .NET 4.5 async support.

We are quite happy with over 349 commits spread like this:

 514 files changed, 15553 insertions(+), 3717 deletions(-)

Head over to Mono's Download Page to get the goods.

Posted on 13 Aug 2012 by Miguel de Icaza

Hiring: Documentation Writer and Sysadmin

We are growing our team at Xamarin, and we are looking to hire both a documentation writer and a system administrator.

For the documentation writer position, you should be familiar with both programming and API design, and be able to type at least 70 wpm (you can check your own speed at play.typeracer.com). Ideally, you would be based in Boston, but we can make this work remotely.

For the sysadmin position, you would need to be familiar with Unix system administration. Linux, Solaris or MacOS would work and you should feel comfortable with automating tasks. Knowledge of Python, C#, Ruby is a plus. This position is for working in our office in Cambridge, MA.

If you are interested, email me at: miguel at xamarin.

Posted on 11 Aug 2012 by Miguel de Icaza

XNA on Windows 8 Metro

The MonoGame Team has been working on adding Windows 8 Metro support to MonoGame.

This will be of interest to all XNA developers that wanted to target the Metro AppStore, since Microsoft does not plan on supporting XNA on Metro, only on the regular desktop.

The effort is taking place on IRC in the #monogame channel on irc.gnome.org. The code is being worked on in the develop3d branch of MonoGame.

Posted on 19 Apr 2012 by Miguel de Icaza

Contributing to Mono 4.5 Support

For a couple of weeks I have been holding off on posting about how to contribute to Mono, since I did not have a good place to point people to.

Gonzalo has just updated our Status pages to include the differences between .NET 4.0 and .NET 4.5; these provide a useful roadmap for features that should be added to Mono.

This is particularly relevant in the context of ASP.NET 4.5; please join us on mono-devel-list@lists.ximian.com.

Posted on 13 Apr 2012 by Miguel de Icaza

Modest Proposal for C#

This is a trivial change to implement, and would turn what today is an error into useful behavior.

Consider the following C# program:

struct Rect {
	public int X, Y, Width, Height;
}

class Window {
	Rect bounds;

	public Rect Bounds {
		get { return bounds; }
		set {
			// Some code that needs to run when the property is set
			WindowManager.Invalidate (bounds);
			WindowManager.Invalidate (value);
			bounds = value;
		}
	}
}

Currently, code like this:

Window w = new Window ();
w.Bounds.X = 10;

Produces the error:

Cannot modify the return value of "Window.Bounds.X" because it is not a variable

The reason is that the property getter returns a copy of the "bounds" structure, and making changes to the returned copy has no effect on the original property.

If we had used a public field for Bounds, instead of a property, the above code would compile, as the compiler knows how to get to the "Bounds.X" field and set its value.

My suggestion is to alter the C# compiler so that what today is considered an error when accessing properties instead does what the developer expects.

The compiler would rewrite the above code into:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
w.Bounds = tmp;

Additionally, it should cluster all of the changes done in a single call, so:

Window w = new Window ();
w.Bounds.X = 10;
w.Bounds.Y = 20;

Will be compiled as:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
tmp.Y = 20;
w.Bounds = tmp;

To avoid calling the setter for each property set in the underlying structure.

The change is trivial and won't break any existing code.

Posted on 11 Apr 2012 by Miguel de Icaza

Can JITs be faster?

Herb Sutter discusses in his Reader QA: When Will Better JITs save Managed Code?:

In the meantime, short answer: C++ and managed languages make different fundamental tradeoffs that opt for either performance or productivity when they are in tension.

[...]

This is a 199x/200x meme that’s hard to kill – “just wait for the next generation of (JIT or static) compilers and then managed languages will be as efficient.” Yes, I fully expect C# and Java compilers to keep improving – both JIT and NGEN-like static compilers. But no, they won’t erase the efficiency difference with native code, for two reasons.

First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency. (This is the opposite of C++, which has added a lot of productivity-oriented features like auto and lambdas in the latest standard, but never at the expense of performance efficiency.) In particular, managed languages chose to incur costs even for programs that don’t need or use a given feature; the major examples are assumption/reliance on always-on or default-on garbage collection, a virtual machine runtime, and metadata.

This is a pretty accurate statement on the difference of the mainstream VMs for managed languages (.NET, Java and Javascript).

Designers of managed languages have chosen the path of safety over performance for their designs. For example, accessing elements outside the boundaries of an array is an invalid operation that terminates program execution, as opposed to crashing or creating an exploitable security hole.

But I have an issue with these statements:

Second, even if JIT were the only big issue, a JIT can never be as good as a regular optimizing compiler because a JIT compiler is in the business of being fast, not in the business of generating optimal code. Yes, JITters can target the user’s actual hardware and theoretically take advantage of a specific instruction set and such, but at best that’s a theoretical advantage of NGEN approaches (specifically, installation-time compilation), not JIT, because a JIT has no time to take much advantage of that knowledge, or do much of anything besides translation and code gen.

In general the statement is correct when it comes to early Just-in-Time compilers and perhaps reflects Microsoft's .NET JIT compiler, but this does not apply to state of the art JIT compilers.

Compilers are tools that convert human readable text into machine code. The simplest ones perform straightforward translations from the human readable text into machine code, typically working through a series of phases that take the program to progressively lower-level representations.

Optimizing compilers introduce a series of steps that alter their inputs to ensure that the semantics described by the user are preserved while generating better code:

An optimization that could be performed on the high-level representation would transform the textual "5 * 4" in the source code into the constant 20. This is an easy optimization that can be done up-front. Simple dead code elimination based on constant folding like "if (1 == 2) { ... }" can also be trivially done at this level.

An optimization on the medium representation would analyze the use of variables and could merge subexpressions that are computed more than once, for example:

	int j = (a*b) + (a*b)

Would be transformed by the compiler into:

	int _tmp = a * b;
	int j = _tmp + _tmp;

A low-level optimization would alter a "MULTIPLY REGISTER-1 BY 2" instruction into "SHIFT REGISTER-1 ONE BIT TO THE LEFT".

JIT compilers for Java and .NET essentially break the compilation process in two: they serialize the data in the compiler pipeline and split the work between two steps. The first step dumps the result into a .dll or .class file.

The second step loads this file and generates the native code. This is similar to purchasing frozen food from the supermarket: you unwrap the pie, shove it in the oven and wait 15 minutes.

Saving the intermediate representation and shipping it off to a new system is not a new idea. The TenDRA C and C++ compilers did this. These compilers saved their intermediate representation into an architecture neutral format called ANDF, similar in spirit to .NET's Common Intermediate Language and Java's bytecode. TenDRA used to have an installer program which was essentially a compiler for the target architecture that turned ANDF into native code.

Essentially, JIT compilers have the same information that a batch compiler has today. For a JIT compiler, the problem comes down to striking a balance between the quality of the generated code and the time it takes to generate the code.

JIT compilers tend to go for fast compile times over quality of the generated code. Mono lets users configure this trade-off by picking the default optimization level, and even lets them use LLVM to perform the heavy-duty optimizations on the code. It is slow, but the generated code quality is the same code quality you get from LLVM with C.

Java HotSpot takes a fascinating approach: it does a quick compilation on the first pass, but if the VM detects that a piece of code is being used a lot, the VM recompiles the code with all the optimizations turned on and then hot-swaps the code.

.NET has a precompiler called NGen, and Mono allows the --aot flag to be passed to perform the equivalent process that TenDRA's installer did. They precompile the code tuned for the current hardware architecture to avoid having the JIT compiler spend time at runtime translating .NET CIL code to native code.

In Mono's case, you can use the LLVM optimizing compiler as the backend for precompiling code, which produces great code. This is the same compiler that Apple now uses on Lion, and as LLVM improves, Mono's generated code improves.
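
As a rough sketch of how this looks in practice (the assembly name is made up, and the exact flags vary a little between Mono versions):

# Precompile program.exe to native code for this machine, so the JIT
# does not have to translate the CIL at runtime.
mono --aot program.exe

# The same, but using the LLVM backend for the heavy optimizations.
mono --llvm --aot program.exe

# Run the program; Mono picks up the precompiled image when it is present.
mono program.exe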

NGen has a few limitations in the quality of the code that it can produce. Unlike Mono, NGen acts merely as a pre-compiler and tests suggest that there are very limited extra optimizations applied. I believe NGen's limitations are caused by .NET's Code Access Security feature which Mono never implemented [1].

[1] Mono only supports the CoreCLR security system, but that is an opt-in feature that is not enabled for desktop/server/mobile use. A special set of assemblies are shipped to support this.

Optimizing JIT compilation for Managed Languages

Java, JavaScript and .NET have chosen a path of productivity and safety over raw performance.

This means that they provide automatic memory management, array bounds checking and resource tracking. Those are really the elements that affect the raw performance of these languages.

There are several areas in which managed runtimes can evolve to improve their performance. They won't ever match the performance of hand-written assembly language code, but here are some areas that managed runtimes can work on to improve performance:

Alias analysis: arrays are accessed with array operations instead of pointer arithmetic, which makes alias analysis simpler.

Intent: with the introduction of LINQ in C#, developers can shift their attention from how a particular task is done to expressing the desired outcome of an operation. For example:

var biggerThan10 = new List<int> ();
for (int i = 0; i < array.Length; i++){
    if (array [i] > 10)
       biggerThan10.Add (array [i]);
}

Can be expressed now as:

var biggerThan10 = array.Where (x => x > 10).Select (x => x);

// with LINQ syntax:
var biggerThan10 = from x in array where x > 10 select x;

Both managed compilers and JIT compilers can take advantage of the rich information that is preserved to turn the expressed intent into an optimized version of the code.

Extend VMs: Just like Javascript was recently extended to support strongly typed arrays to improve performance, both .NET and Java can be extended to let developers give up certain safety features in exchange for performance.

.NET could allow developers to run without the CAS sandbox and without AppDomains (like Mono does).

Both .NET and Java could offer "unsafe" sections that would allow performance critical code to skip array bounds checking (at the expense of crashing or creating a security gap; this can be done today in Mono by using -O=unsafe).

.NET and Mono could provide allocation primitives that allocate objects on a particular heap or memory pool:

	var pool = MemoryPool.Allocate (1024*1024);

	// Allocate the TerrainMesh in the specified memory pool
	var p = new pool, TerrainMesh ();

	[...]
	
	// Release all objects from the pool, all references are
	// nulled out
	//
	Assert.NotEquals (p, null);
	pool.Destroy ();
	Assert.Equals (p, null);
	

Limiting Dynamic Features: Current JIT compilers for Java and .NET have to deal with the fact that code can be extended dynamically by either loading code at runtime or generating code dynamically.

HotSpot leverages its code recompilation to implement sophisticated techniques that perform devirtualization safely.

On iOS and other platforms it is not possible to generate code dynamically, so code generators could trivially devirtualize, inline certain operations and drop features from both their runtimes and the generated code.

More Intrinsics: An easy optimization that JIT engines can do is map common constructs into native features. For example, we recently inlined the use of ThreadLocal<T> variables. Many Math.* methods can be inlined, and we applied this technique to Mono.SIMD.
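
To make this concrete, here is a purely illustrative C# sketch of the kind of code that benefits from these intrinsics (my example, not the actual code we measured):

using System;
using System.Threading;

class IntrinsicsExample
{
    // A per-thread scratch buffer: the JIT can inline the ThreadLocal<T>.Value
    // access instead of going through a generic lookup on every call.
    static readonly ThreadLocal<double[]> scratch =
        new ThreadLocal<double[]> (() => new double [256]);

    static double Length (double x, double y)
    {
        // Math.Sqrt is a typical intrinsic candidate: it can be mapped to a
        // hardware square-root instruction rather than a regular method call.
        return Math.Sqrt (x * x + y * y);
    }

    static void Main ()
    {
        var buffer = scratch.Value;      // candidate for inlining
        buffer [0] = Length (3, 4);
        Console.WriteLine (buffer [0]);  // prints 5
    }
}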

Posted on 04 Apr 2012 by Miguel de Icaza

Microsoft's new Open Sourced Stacks

Yesterday Microsoft announced that another component of .NET would be open sourced. The entire ASP.NET MVC stack is now open source, including the Razor Engine, System.Json, Web API and WebPages.

With this release, they will start accepting external contributions to these products and will be running the project like other open source projects are.

Mono and the new Stacks

We imported a copy of the git tree from Codeplex into GitHub's Mono organization in the aspnetwebstack module.

The mono module itself has now taken a dependency on this module, so the next time that you run autogen.sh in Mono, you will get a copy of the aspnetwebstack inside Mono.

As of today, we have replaced our System.Json implementation (which was originally built for Moonlight) with Microsoft's implementation.

Other libraries like Razor are next, as those are trivially imported into Mono. But ASP.NET MVC 4 itself will have to wait since it depends on extending our own core ASP.NET stack to add asynchronous support.

Our github copy will contain mostly changes to integrate the stack with Mono. If there are any changes worth integrating upstream, we will submit the code directly to Microsoft for inclusion. If you want to experiment with the ASP.NET Web Stack, you should do so with your own copy and work directly with the upstream maintainers.

Extending Mono's ASP.NET Engine

The new ASP.NET engine has been upgraded to support C# 5.0 asynchronous programming, and supporting this will require a number of changes to Mono's core ASP.NET.

We currently are not aware of anyone working on extending our ASP.NET core engine to add these features, but those of us in the Mono world would love to assist enthusiastic new developers, or people who love async programming, in bringing these features to Mono.

Posted on 28 Mar 2012 by Miguel de Icaza
This is a personal web page. Things said here do not represent the position of my employer.