Kristian Rietveld picked up on the challenge to write a WHIRL to CIL compiler. This means that any languages supported by the GCC WHIRL fork (C, C++ and the two Fortrans) could be used as compiler front-ends.
Kris has a sample of his work so far: a small C program translated to WHIRL (gcc), then to CIL (he had to write his own WHIRL reader).
A sweet interview with Nat on the Novell Linux Desktop.
Man, I spent some quality time going over the JBoss soap opera. I always felt things were not quite right when I was reading TheServerSide.
Posted on 19 May 2004
Not everyone gets the importance of free software at once. To me the point of no return came when I was using non-Intel Linux machines, and could not run any of the new proprietary Linux software on the Alpha host we had.
The Movable Type licensing term changes have done the same for a new generation. Mark's Freedom 0 article on Dive Into Mark rings true. It is the same kind of feeling we had when we started the Gnome project, when we decided not to settle for something that was merely `free enough'.
Posted on 17 May 2004
After five months of delays, Massimiliano Mantione has joined the Novell Mono team. His focus will be on compiler optimizations.
As part of his interview process (which lasted a month), Massimiliano completed the array bounds check elimination feature for the JIT compiler. Now that he is part of the Novell staff he is redoing his work to match the current VM.
Plenty of good feedback from the Mono Beta 1. Duncan has been taking care of the packaging, and as of today we are shipping zip files bundling all the packages for those of you who did not want to download each one manually.
Posted on 14 May 2004
Congratulations to everyone involved!
Posted on 11 May 2004
Posted on 07 May 2004
I tried it out, and I was able to play the music I had purchased on Linux; it is really nice.
Johansen was on the #mono channel, and we got a chance to look into some of the performance issues in Rijndael. Ben noticed something interesting: plenty of our array accesses were done with a byte value on an array that always had at least 256 elements. So he cooked up a patch that performs said ABC (array bounds check) elimination.
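To illustrate the pattern (this is a sketch, not Mono's actual code): when the index is a byte value masked into the 0..255 range and the table has 256 entries, the compiler can prove the bounds check always passes and remove it.

```python
# Illustrative sketch of the access pattern that array bounds check
# (ABC) elimination targets.  The table contents are dummy values.
SBOX = [(i * 7 + 3) & 0xFF for i in range(256)]  # 256-entry lookup table

def substitute(b: int) -> int:
    idx = b & 0xFF      # the index is provably in 0..255
    return SBOX[idx]    # 0 <= idx < len(SBOX) always holds, so a JIT
                        # can drop the runtime bounds check entirely
```

In a crypto kernel like Rijndael, where such S-box lookups happen in the innermost loop, removing the per-access check is a measurable win.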
A more comprehensive patch has been cooked by Massimiliano (who joined Novell today to work on Mono) which hopefully will make it before Mono Beta 1.
A series of dates for the Mono 1.0 have been posted.
Posted on 26 Apr 2004
Jeff seems to like Cringely's statement: "The central point was that paying too much attention to Microsoft simply allows Microsoft to define the game. And when Microsoft gets to define the game, they ALWAYS win."
A nice statement, but nothing more than a nice statement; other than that, it's all incorrect.
Microsoft has won in the past due to many factors, none of them related to `letting them define the game'. A couple from a long list:
In 1993-1994, Linux had the promise of becoming the best desktop system. We had real multi-tasking, a real 32-bit OS. Client and server in the same system: Linux could be used as a server (file sharing, web serving), and we could run DOS applications with dosemu. We had X11: we could run applications remotely on a large server and display them on a small machine. Linux quickly became a vibrant, innovative community, and with virtual desktops in our window managers, we could do things ten times as fast as Windows users! TeX was of course `much better than Windows, since it focuses on the content and the logical layout', and for those who did not like that, there was always the "Andrew" word processor. Tcl/Tk was as good as building apps with QuickBasic.
And then Microsoft released Windows 95.
The consensus at that time? Whatever Microsoft is doing is just a thin layer on top of COM/DCOM/Windows DNA, which to most of us meant `same old, same old; we are the ones innovating!'.
And then Microsoft comes up with .NET.
Does something like XAML matter? Not really. But it makes it simple for relatively newbie users to create relatively cute apps, in the same way anyone could build web pages with HTML.
Does Avalon really matter? It's a cute toolkit, with tons of widgetry, but nothing that we can't do in a weekend, right?
Does the fact that it's built on top of .NET matter? Well, you could argue it has some productivity advantages and security features, and get into a long discussion of .NET vs Java, but that is beside the point.
Everyone is arguing about tiny bits of the equation: `We have done that with Glade before!', `Gtk/Qt are cross-platform!', `We can get the same with good language bindings!', `We already have the widgets!', `Cairo is all we need', `What do users really want?' and of course `Don't let them define the game!'.
They are all fine points of view, but what makes Longhorn dangerous for the viability of Linux on the desktop is that the combination of Microsoft deployment power, XAML, Avalon and .NET is killer. It is what Java wanted to do with the Web, but with the channel to deploy it and the lessons learned from Java mistakes.
The combination means that Longhorn apps get the web-like deployment benefits: develop centrally, deploy centrally, and safely access any content with your browser.
The sandboxed execution in .NET means that you can visit any web site and run rich local applications (as opposed to web applications) without fearing for your data security: spyware, trojans and what have you.
Avalon also means that these new "Web" applications can visually integrate with your OS: they can use native dialogs and the functionality in the OS (like the local contact picker).
And building fat-clients is arguably easier than building good looking, integrated, secure web applications (notice: applications, not static web pages).
And finally, Longhorn will get deployed, XAML/Avalon applications will be written, and people will consume them. The worst bit: people will expect their desktop to be able to access these "rich" sites. With 90% market share, it seems doable.
Will Avalon only run on Longhorn? Maybe. But do not count on that. Microsoft built IE4 for Windows 98, and later backported it to Windows 95, Windows 3.11 and moved it to HP-UX and Solaris.
The reason people are genuinely concerned and are discussing these issues is because they do not want to be caught sleeping at the wheel again.
Will this be the end of the world for Linux and the Mac? Not likely, many of us will continue using our regular applications, and enjoy our nicely usable and consistent desktops, but it will leave us out of some markets (just like it does today).
Btw, the Mozilla folks realized this already.
Although it was easy to see why .NET included Code Access Security (CAS) in 1.0, there was no real use for it at the time. With Longhorn/Avalon/XAML it becomes obvious why it was implemented.
Although some of the discussion has centered around using a native toolkit like Gtk+/XUL to build a competitor that would have ISV sex-appeal, this is not a good foundation, as it won't give us Web-like deployment: we need a stack that can be secured to run untrusted applications, and we need to be able to verify the code that is downloaded, which leaves us with Java or .NET.
Time is short: Microsoft will ship Avalon in two to three years, and they already have a preview of the technology out.
I see two possible options:
I think someone will eventually implement Avalon (with or without the assistance of the Mono team); it's just something that developers enjoy doing.
If we choose to go in our own direction, we should use the strengths of open source to get to market quickly, and settle early on requirements, design guidelines, key people who could contribute, compatibility requirements and deployment platforms.
We have been referring internally at Novell to the latter approach as the Salvador platform (after a long debate about whether it should be called MiggyLon or Natalon).
We do not know if and when we would staff such an effort, but it's on the radar.
Are there patents in Avalon? It is likely that Microsoft will try to get some patents on it, but so far there are few or no new inventions in Avalon:
Posted on 24 Apr 2004
I will be doing the keynote at the Usenix Virtual Machine Research and Technology Symposium on May 6 and 7 in California next month.
Am also attending the Usenix Conference in Boston this summer.
Nice new search engine from Amazon.
Posted on 19 Apr 2004
Greg was nice enough to point me to the PathScale compiler suite: a high-performance compiler suite for AMD64. PathScale's compiler is based on Open64, but has reportedly updated its C/C++ compiler front-ends to a recent GCC (as opposed to the current Open64-derivative compilers).
PathScale is led by Fred Chow, an ex-SGI developer who created WHIRL, and later Pro64.
So as a starting point for a Managed C++ compiler, PathScale sources might be a better option.
This product shows one of the splits people feared would be used to circumvent the GPL: the fairly complex front-ends (C++) have been split out from the backend, and a proprietary, highly optimizing compiler has been developed for the AMD64. It was just a matter of time before someone did this, though.
Posted on 18 Apr 2004
Let me explain why Open64 is the best starting point for implementing Managed C++.
First, the requirements:
Since C++ is a language of titanic dimensions, it is not one that you want to reimplement. Your best choice is to pick an existing C++ compiler. In the case of the free software community, that means gcc or any of its forks.
The question is whether GCC's internal IR can be retargeted to produce code for the stack-based CIL, and whether you can propagate the extra available metadata. The latter seems like a problem that we would have in both Open64 and gcc.
Now, what makes Open64 interesting is that we can achieve the first goal without touching GCC: C and C++ compilation would be performed with the Open64 compiler down to WHIRL, and then a new generic WHIRL-to-CIL compiler generates the files we want. We do not have to touch any of the existing GCC internals (it is a difficult code base to work with).
The above is very similar to IKVM's JVM to CIL compiler: the input "language" is an IR, the output language is a different IR.
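The core of such an IR-to-IR translator is a post-order walk over the tree-shaped input that emits stack-based output. A toy sketch of the idea (the IR shapes and opcode spellings here are illustrative, not actual WHIRL or CIL):

```python
# Toy illustration of lowering a tree-shaped IR to stack-based
# instructions -- the same general shape a WHIRL-to-CIL translator
# would take.  An expression is ('const', n) or (op, left, right).
def lower(expr, out):
    if expr[0] == 'const':
        out.append(f'ldc.i4 {expr[1]}')
    else:
        op, left, right = expr
        lower(left, out)    # emit operands first, so their results sit
        lower(right, out)   # on the evaluation stack exactly when the
        out.append(op)      # operator instruction executes

code = []
lower(('mul', ('add', ('const', 2), ('const', 3)), ('const', 4)), code)
print('\n'.join(code))
# ldc.i4 2 / ldc.i4 3 / add / ldc.i4 4 / mul
```

Because CIL is an evaluation-stack machine, this simple recursive scheme is enough to sequence a whole expression tree; the hard parts of a real translator are control flow, types and metadata, not expression lowering.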
The fact that Open64 does not target the x86 is, in a way, irrelevant, because we are not interested in targeting the x86; we are interested in targeting the CIL.
If we were to use the current GCC, we would have to intercept a suitable stage in the compiler, and most likely deal with RTL to produce bytecodes for CIL. RTL is hard to penetrate and deeply tied to gcc internals. WHIRL is independent, well documented, and has various tools to process, consume and analyze the various WHIRL stages.
Finally, there is the point that the FSF and the GCC maintainers refuse to make structural changes to GCC on philosophical grounds: such a split would encourage the creation of proprietary front-end and back-end systems.
This means not only that it's better to work with the Open64 fork of GCC, which has already made this pragmatic decision and is a better foundation for targeting the CIL, but also that our goals are more aligned with Open64's than with those of the GCC maintainers.
The latest version of Open64 folded in the changes from Intel and ICT.
Posted on 15 Apr 2004