I have not really formed an opinion, and am definitely not an expert on the matter of binary serialization for XML, but I believe that Omri's position is an incorrect one. Omri's thesis is that there are multiple things you might want to optimize for: size, parsing speed, and the overhead of generating the data, and that it is not possible to define a file format that satisfies all of those different needs.
This is not new; it happens with everything we build today. As software developers we constantly have to balance multiple needs: maintainability, performance, extensibility, perfection, scalability, configurability, usability. The job of an architect is precisely to find a good balance, taking input from multiple sources. We are going to make mistakes every step of the way, but the sooner we start making those mistakes, the earlier we will be able to collect real data on the issues at hand.
The other problem with Omri's thesis is that it misses one point: who is likely to benefit from a binary format? They will probably fall into two camps: those who want smaller chunks of data transferred, and those who want faster encoding and decoding of the infoset; not the average XML user.
We do not need to satisfy everyone, just a large percentage of the user base. The others can feel free to define a different format, ignore this solution altogether, or build on top of it in the future.
For instance, TCP/IP makes no guarantees about quality of service, despite the fact that many people have valid and important uses for it. Still, it served a large community of people, and what is even more interesting: clever people found ways to work within these limitations and do a reasonable job given the foundation.