Article:
  WinFX: An All-Managed API
Subject:   .NET, Windows API and C
Date:   2004-04-03 05:30:22
From:   igriffiths
Response to: .NET, Windows API and C

Quoting from Apple's Introduction to Mac OS X Development Technologies, here's what they say about Carbon:


"The Carbon environment provides fine-grained procedural APIs in C and C++ that are intended for developers who are migrating applications from classic Mac OS to Mac OS X."


They go on to identify Carbon as being primarily for use when you need to support both Mac OS 9 and Mac OS X.


But they describe Cocoa first, and they describe it in much more glowing terms. Anyone new to Apple development reading that page would inevitably think that Cocoa is the best way of developing for OS X, and that Carbon is for migration of existing code, and backwards compatibility with OS 9.


The message from that page - the page where Apple sets out its developer technologies for those new to Apple - is clear: Carbon is there as a link to the past. A fully-supported and powerful link, but wholly rooted in the past nonetheless. What were you reading that gave you a different impression?


"They just don't force their developers to learn the same things in a new way, loosing productivity"


This seems to be a theme in this thread - you always seem to equate 'learning new things' with 'losing productivity'. In my experience this is often not the case. Learning new ways of doing things often helps you do things more efficiently. The .NET Framework is a case in point.


(This reminds me of a talk given a while back by David Chappell. Someone asked him if .NET was the last time Microsoft were going to ask developers to learn new stuff. He replied saying "If you can't stand change, get out of the software business.")



"I think all necessary features are present in C to produce any kind of code, it's just about good and bad API design. "


There are several reasons that this just isn't true. To pick a couple:


(1) C doesn't have strong typing. Last time I checked, casting anything to (void*) was still legal - C's static typing is entirely optional. Strong typing is fundamentally important to .NET's security system - the ability to use the new styles of deployment in .NET (where you can get all the benefits of web-style deployment with nothing more than a URL, but also retain the benefits of a rich client-side windows program) would be worthless if the .NET framework wasn't able to verify that the code it is running sticks to the rules. (You may have heard of 'unsafe' code where you can still use pointers. This exists, but it's not allowed in this style of deployment - such code will be rejected by the .NET framework.)
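

To make that concrete, here's a minimal sketch - hypothetical code, not any real API - of how C's static typing can be bypassed without the compiler or the runtime noticing:

    #include <stdio.h>

    /* Nothing checks what actually gets passed through a void*
       parameter. The compiler accepts both calls below; the second
       one is undefined behaviour, and no runtime check catches it. */

    static void print_label(void *data)
    {
        /* The callee simply trusts that the caller passed a C string. */
        printf("label: %s\n", (char *)data);
    }

    int main(void)
    {
        char *name = "widget";
        int count = 42;

        print_label(name);    /* fine */
        print_label(&count);  /* compiles cleanly, but the type is wrong */
        return 0;
    }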


So standard C is not usable for the 'smart client' deployment, one of the key features of .NET.


(2) C does not embed type information in executables. Many features of the .NET framework rely on the availability of type information. For example, serialization, remoting, designer integration, XML type mapping.


Of course it's possible to do all of these things by hand in C. But to provide these facilities automatically an API needs access to information about the methods or data structures you are using. C's lack of support for access to type information at runtime means any API providing these facilities would require you to reenter all that information in some format it could access. This doubles the effort (e.g. you have to define methods in IDL as well as in C if you're using Windows RPC), and opens opportunities for inconsistency - if you have two representations, it's possible for them to get out of sync.
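

As a hedged illustration of that duplication (the struct and the 'metadata table' here are made up - they just stand in for whatever format such an API would demand):

    #include <stddef.h>

    /* A plain C struct. The compiler knows its layout at compile time,
       but that knowledge is thrown away - nothing in the executable
       describes it. */
    struct Customer {
        int  id;
        char name[64];
    };

    /* So a framework that wanted to serialize Customer automatically
       would need a second, hand-maintained description of the fields. */
    typedef struct {
        const char *field_name;
        size_t      offset;
        size_t      size;
    } FieldInfo;

    static const FieldInfo customer_fields[] = {
        { "id",   offsetof(struct Customer, id),   sizeof(int) },
        { "name", offsetof(struct Customer, name), 64          },
        /* Add a field to Customer and forget to update this table,
           and the two representations silently drift out of sync. */
    };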


There are others, but I'm not trying to provide a comprehensive list - I'm just pointing out that you are mistaken to "think all necessary features are present in C". They are not.



"you can wrap [Win32] in any language, any platform and provide a very high level user funccionality and that's why we have VB, Delphi and current .NET. that make it really easy to write a user app"


Actually I think VB's ease of use is somewhat misunderstood and in some cases overrated.


VB makes some things very easy. It actually makes some other things extremely difficult. The last time I was doing any VB6 development, I ended up writing a few ActiveX controls in C++ because VB wasn't well suited to the task at hand.


The fundamental problem with VB is that it presents its own set of abstractions which are quite different from how Windows really works. These abstractions make life much easier but they also restrict you.


So prior to .NET you had a choice - lots of effort but plenty of power in C or C++, or massively reduced effort in VB but less power.


If you've had similar experience, I could understand why you might think that easier environments are necessarily less powerful. In fact that was pretty much what I felt after using VB6 for a while - I thought that the Win32 API was the only way to get certain things done.


But now that I've been using .NET for the last 3 years, I've realised I was wrong. The limitations were specific to VB6 - it simply chose to present limited abstractions. But of course you could equally choose to present limited abstractions through a C API... The power isn't a function of the language as such.


The power of the facilities provided is in fact independent of the style of API used to present them - the .NET framework shows very convincingly that you can provide an API which is both easy to use and powerful. (And obviously you could provide one that is hard to use and weak...)



"With Longhorn if i want to write plain C code with a simple C compiler i can't, i should stick with an irritating hybrid known as "Managed C++" and COM"


Actually that's Managed C++ OR COM. You wouldn't use both.



"which is a good concept but highly complex to write even a simplest thing"


I think you're just talking about COM here - is that right? But as I said, use of Managed C++ lets you use the WinFX APIs directly from C++. And if you think the WinFX APIs (which are a superset of the .NET Framework) are too complex to write even the simplest thing with, then either you were doing something wrong, or you haven't really tried.



I said: "You're mistaken about how Longhorn will work - it won't be based on the existing USER32, and GDI32 components."


You replied "Did i mentioned that Longhorn would use USER and GDI somewhere???"


It certainly seemed like you did. You said this:


"My own opinion is that the basic modules kernel, user and gdi are an example of a good (commercial) OS implementation. As you can see the kernel of NT systems is excellent and that's because it's written consistently and with a smallest amount of backward compatibility, that's why Microsoft will build even their new Longhorn system on that kernel"


In fact you drew attention to it when I didn't reply to it first time round.


I took this to mean that you thought Longhorn will build on the basic Win32 modules "kernel, user, and gdi", as you called them. (kernel32, user32, and gdi32, is what I presume you are referring to.)


In fact Longhorn does not build on user32 and gdi32 - it replaces them with something else.


Given the flow of the first two sentences, it looked like you were using 'kernel' to describe the core pieces of the OS rather than necessarily just the bits that run in kernel mode. But maybe I misunderstood, and you did actually mean the bits that run in kernel mode, although if that is the case, it's somewhat unclear from the way you wrote it - it makes the second sentence a big non-sequitur.


But if you did simply change subject without warning - from kernel32, user32, and gdi32 in the first sentence to the kernel-mode parts in the second sentence - I still can't fully agree. Remember that large amounts of the UI handling in Windows are in the kernel. These parts of the kernel are being replaced in Longhorn.


It's true that other things like handle management and IO will remain largely unaltered. The scheduler is being modified to support certain multimedia features not possible on today's versions of Windows, but that is an incremental change to the existing scheduler. But so what? You don't program against the kernel-mode APIs for anything other than device drivers.


C is an appropriate language for writing an operating system in. (As is C++.) But that doesn't mean it's an appropriate language for everything - far from it. The whole point of an OS is to abstract away the raw grungy details of hardware handling, memory management, and IO. Seen in that light, it seems like madness to use the same language in user mode as in kernel mode. That's akin to using a hammer for every job!



"I speak about very simple programs written in .NET, they could not be compared in any way with Outlook, they just show the performance overhead of .NET."


But simple programs take just 10-20MB. (And that's before you do any optimization.) That's also the whole working set, and remember that some of that will be .NET Framework DLLs which get mapped between processes. And in my experience they run just fine.


If you're going to get obsessed with memory, where do you stop with reducing the memory footprint? Win32 isn't all that lightweight by the standards of 10 years ago, so the arguments you raise about .NET vs. Win32 all apply equally to Win32 vs. Win16! Back then consumer PCs had to run something lighter than Win32. Now, the fact that it takes several megabytes to run hello world in Windows is not a big deal, because a PC ships with hundreds of megabytes as standard rather than the few megabytes that were standard a decade ago.



"Will my productivity be boosted? By the possibility of rotate windows around???"


That's a pretty specious argument. Just because you can do frivolous things doesn't mean the whole system is frivolous. Picking an arbitrary feature and poking fun at it while ignoring the rest isn't a robust style of critique.


After all, Windows 2000 supports transparent windows, and some would argue that those are just so much chrome, but you're happy with Windows 2000's performance.


The key is whether your productivity is enhanced by the platform. Will spinning UIs help you? Probably not (unless you're building presentation applications). But will the ability to get a job done in far fewer lines of code improve your productivity? Will your productivity be improved by working on a platform which detects whole classes of errors for you and reports them, rather than just plodding on after an error has occurred until a crash happens some time later, as C does if you mess up your pointers?
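

For example (a deliberately contrived sketch - the buffer size and string are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(8);
        if (buf == NULL)
            return 1;

        /* 26 characters plus a terminator into an 8-byte buffer:
           undefined behaviour, but nothing complains at this point. */
        strcpy(buf, "abcdefghijklmnopqrstuvwxyz");

        /* The program plods on, apparently working... */
        printf("still running: %s\n", buf);

        /* ...and the crash, if any, typically surfaces later - often
           when the heap's corrupted bookkeeping is next touched. A
           managed runtime would instead report the error at the point
           of the overrun. */
        free(buf);
        return 0;
    }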


I don't know anyone who's tried a non-trivial project with .NET who didn't find it to offer substantial productivity enhancements. I know lots of people who have tried it successfully. (And none of them had much use for spinning windows around.)



"Or i would be writing a document and making a spreadsheet in a 3D space????"


This is a common misunderstanding about what's meant by the so-called '3D' nature of the Avalon UI. It's not really a 3D UI at all; it simply exploits the 3D acceleration hardware. The UIs are still 2D, unless you use DirectX. (Just like with Win32 today, in fact - but faster, because the system exploits the fastest parts of the GPU for both 2D and 3D work, unlike Win32 today.) See my article on Avalon graphics in this series or on MSDN for more details. (Right now Google finds these as the first two hits searching for: Avalon Graphics.)



"I even disagree with the concepts of the new UI design, it's irritating."


Microsoft haven't actually released Aero, the new UI for Longhorn, yet, so I'm not quite sure what exactly it is you're disagreeing with! Any screenshots you're likely to have seen for Longhorn are all from the developer preview, which contained a slightly odd hybrid UI - mostly a modified Windows XP UI with a few experimental features added.



"The UI of Windows (and Mac, and OS2,etc.) is a great success because 1.It is consistent."


Actually I disagree with that, but that's a whole other topic. There are multiple different standards for keyboard shortcuts, and even for cut and paste! There is inconsistency across Windows applications on things like where configuration settings go on menus, how to navigate around multi-window UIs, and so on...


The Mac seems even more inconsistent when it comes to keyboard navigation through text - every text editing widget seems to have its own minor variations on which keyboard modifiers do what.



"Are you sure that the .NET you see today would be the primary API for the next 10 years? "


Yes - it's very clear that this is the message Microsoft are sending out to everyone: .NET is where new features are going to be added. The only thing that would stop it would be pushback from the developer community. But the reaction from developers has mostly been very positive - you are very much a minority voice.



"Another "paradigms" will raise and .NET will be another "legacy API","


Sure. Eventually. Just like every other API before it. But Microsoft don't push out new APIs for fun - it's expensive and difficult.


On past experience, APIs tend to last a little over 10 years before they start to look outmoded by the evolving capabilities of the hardware. Win16 was around for approximately that long. Win32 has been around for roughly that long. So yes, I'm sure .NET is likely to have been relegated to 'old stuff' by 2020. But I don't agree with your reasoning:


"it's all about business, not about technology or preferences."


I think it is about technology and business. Win32 was too heavyweight for 1980s PC technology. .NET was too heavyweight for the average 1990s PC. Typical PCs in 2010-2020 will likely make it possible for the OS to provide a range of facilities on our behalf that make .NET look cumbersome, and Win32 positively stone-age.


Of course it's about business too - new technologies are adopted because they offer a business benefit - improved return on development spending in this case. But the business benefit is enabled by a technological advance.



"It is really bad that all new funccionality will be provided to C programmers through COM. "


Fortunately, hardly anyone is using C any more. So for most people, the move to an OO API is a big step forward: it means the API uses the same level of abstraction that they've been using in their own programs for years. The extra work of going through COM is the price you pay for sticking with C.


But then as a C developer you're already used to working far harder to get something done than a .NET developer. ;-)



"the new Longhorn API would restrict freedom of programming, not utilize existing programmer's knowledge"


You could apply the same logic to any change of technology. But the only reason people adopt these new technologies is because they believe they offer benefits.


Microsoft really isn't in a position to force the use of a new API - if the developer community had simply rejected .NET, and carried on using Win32, Microsoft would have had no choice but to carry on making Win32 the primary API. After all, while Microsoft are effectively in charge of what OS gets installed on PCs because of their monopoly on the client OS, they are *not* in control of how people write Windows applications. They're free to introduce new APIs, but they can't make developers use them. Indeed there are plenty of APIs introduced by Microsoft over the years that didn't achieve widespread developer acceptance.


Nobody forced any organization to use .NET. Companies are using .NET because they saw that it offered an advantage over using Win32.

