Part Two – Windows & Objectification (1987-2001)

So, having made the jump to IBM-compatible PCs, I had to decide what to program in. My BBC Master had been so flexible that I didn’t know what to get for the new machine. Turbo Pascal was the obvious choice but I really wanted to use C as well.

TopSpeed Modula-2

I found a programming package made by JPI called TopSpeed. Niels Jensen, one of Borland’s founders, started this company and seeded it with much of the original Borland development team. The TopSpeed programming environment had pluggable compiler language modules – Pascal, C, C++ and Modula-2 – with an assembler also built in. You could mix and match languages in a project, compile them separately and link them all together. Seamlessly. No muss. No fuss. The UI debugger would step through the source files of each language just as you’d expect. The generated code was tight and efficient. This was in 1987. It was a superb product.

After some reading I went ahead and got C, C++ and Modula-2. So why not Pascal?

Pascal had been designed primarily as a teaching language. The Modula family was meant to be, in essence, the real-world version of Pascal in terms of its feature set – suitable for implementing systems rather than just for learning.

It included the ability to create concurrent co-routines very easily (in essence, co-operatively multi-tasked subroutines, not too dissimilar to the “green threads” seen on early Java, Python and Ruby VMs). Remember that this was running on DOS, and multiprocessor machines were still in the realms of fantasy for microcomputer-class hardware.

It had very close control of identifier scoping, with explicit exports from modules and the ability to make “opaque exports”, whereby only a pointer was exposed to the importing module; all the operations on that pointer had to be defined in the originating implementation module. This allowed the programmer to create object-based programs which had encapsulation but lacked polymorphism (though this could be partially achieved) and inheritance (though, again, this could be achieved through delegated composition). Modules could also run code to initialise local objects, and it had all the strong-typing features of Pascal.

This meant it was very simple to build what were essentially object-oriented programs in Modula-2, with objects having their own apparent thread of execution – not unlike the object style pioneered in Simula, or the Actor model that Erlang uses today.
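Modula-2’s opaque exports translate almost directly into the opaque-pointer idiom still used in C and C++ today. Here is a minimal sketch (the Counter type and its operations are invented for illustration): the “definition module” half exposes only an incomplete type plus operations, while the layout stays private to the “implementation module” half.

```cpp
// Sketch of Modula-2's "opaque export" idiom, translated to C++.
// Client code sees only a pointer and the operations; the struct
// layout is hidden, giving encapsulation without classes.

#include <cassert>

// --- what the DEFINITION MODULE would export ---
struct Counter;                             // incomplete type: layout hidden
Counter* Counter_New(int start);
void     Counter_Inc(Counter* c);
int      Counter_Value(const Counter* c);
void     Counter_Free(Counter* c);

// --- what the IMPLEMENTATION MODULE would contain ---
struct Counter { int value; };

Counter* Counter_New(int start)          { return new Counter{start}; }
void     Counter_Inc(Counter* c)         { ++c->value; }
int      Counter_Value(const Counter* c) { return c->value; }
void     Counter_Free(Counter* c)        { delete c; }
```

Because the importing side never sees the struct body, the implementation can change its representation freely – exactly the encapsulation benefit described above.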

I used the C++ compiler to learn C++, but the Modula-2 compiler was head-and-shoulders above it in the TopSpeed package, so apart from exploring the language enough to understand the basics, I didn’t write much using it.

In 1991-92, Clarion Software merged with JPI, and development of TopSpeed in its then-current form ceased in favour of Clarion’s database product. In my opinion this was a real shame, although JPI hadn’t made the transition to Windows well at all: TopSpeed could create Win16-compatible binaries, but switching between DOS and Windows was very clunky.

Microsoft Visual Basic

I’d been using Windows 3.x more and more and was getting frustrated at using TopSpeed’s DOS product to create Windows programs. I think I was one of the first people in the UK to buy Visual Basic 1.0 when it came out in the summer of 1991.

It was a revelation. I’d written text-based form generators and handlers before but this really was excellent. Obviously the language wasn’t as rich as Modula-2 or even Pascal but it was quite capable of structured record handling.

I still had to use TopSpeed to create Win16 DLLs: simple ones that just wrapped some of the Windows API functions (because of VB’s inability to properly handle structures being passed to or returned from an API), or more complex ones where the interpreted performance of VB left something to be desired.

I do believe that if VB 1.0 hadn’t come out, Windows 3.x would not have gained such a foothold in the personal microcomputer world. A year after VB was released there was a flood of Windows applications available at a reasonable cost. It completely lowered the barrier to entry for developing on the Windows platform, allowing people who didn’t have the time, inclination (or ability) to get to grips with Windows’ somewhat arcane infrastructure to quickly write useful programs. Essentially, it commoditised Windows programming.

I used VB a lot through VB 1.0 – 3.0 (when the JET database engine support was added) but when VB 4.0 came out in 1995 I declined to upgrade at home. Why? Ah, well, in a word…Delphi. More on this later.

I continued using VB 4.0 and upwards to VB 6.0 at work for various projects as it became the de facto language for the majority of our desktop applications. We even created Windows Services using it (with the assistance of some add-ons). People may knock VB as a product, but I believe that VB 6.0 was, and most probably still is, the reason for Microsoft’s dominance in the business desktop world. I don’t love it as a language but I don’t hate it either. I’ve used it, and Visual Basic for Applications (VBA) and VBScript, off-and-on for nearly two decades now so it’s like a comfortable pair of old slippers – a bit worn and scruffy but you just can’t throw them away…

The important thing with VB was that it made it very simple to use the whole Windows ecosystem of registered COM components. I had already realised that it’s not so much the language syntax (though there are extreme cases!) as the library support that really makes for a good programming environment. It doesn’t really matter if a feature isn’t in a language as long as it’s easy to pull it in from a library. If the library is standard across language runtimes then so much the better. The VBX/OCX extensions and the ability to map calls onto the Win16/32 API made for a very powerful and compelling product.


Borland C++

In 1992, as I ditched TopSpeed, I purchased Borland C++ 3.1 and thought it was a fantastic product. It came in both DOS and Win16 formats and could target both MS-DOS and Windows 3.0 (Win16). Each environment had its own application framework: Turbo Vision for DOS, which provided Windows-like artefacts, and OWL (the Object Windows Library), which wrapped the MS Windows APIs much more completely than MFC did. Both were excellent and had many points of similarity in their APIs, meaning it was possible to port applications from one to the other without too much trouble.

Along with VB, I used Borland C++ a lot between 1992-1995, and became very familiar with the language, its libraries and its intricacies.

So, having used it for upwards of 15 years, what do I think of C++? I think it’s an excellent language that has been getting a lot of undeserved bad press in recent years.

One has to bear in mind that C++ was originally designed to be essentially backward-compatible with C (and third-party C libraries), and the first C++ compilers were really just pre-compilers generating C code. The guys at AT&T did a stupendous job. When I started using C++ I essentially used it as a “type-safe C” and, over the years, I’ve found that a not inconsiderable number of “C++ programmers” never get far beyond that.

I know people’s main grumble, compared to Java for instance, is having to make sure that you free memory when you’ve finished with it – i.e. it isn’t garbage collected like later virtual-machine languages. But C++ was designed to run on machines with much less power than today’s, and it just didn’t make sense to carry that processing overhead. As I’d programmed in assembler, Pascal, C and Modula-2, each of which required me to manage memory, this wasn’t much of a problem, and C++ had the auto_ptr smart pointer (ownership-transferring rather than reference-counted) to wrap raw pointers and protect resources. Borland C++ also had “CodeGuard” built in which, at the end of a run, would provide a report of unreleased memory along with each allocation site, so any leaks could easily be found and fixed during development – assuming the code was exercised to a reasonable extent in testing.
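For readers who never met auto_ptr: the underlying idea is scope-based ownership, carried forward by std::unique_ptr in modern C++. A minimal sketch (Buffer and use_buffer are invented names for illustration): the wrapped object is deleted automatically when its owner leaves scope, so the function cannot leak even without a garbage collector.

```cpp
// Sketch: scope-based ownership, the idea behind the old auto_ptr
// (superseded by std::unique_ptr since C++11). The resource is
// released automatically when the owning object leaves scope.

#include <memory>
#include <cassert>

struct Buffer {
    int size;
    explicit Buffer(int n) : size(n) {}
};

// Returns the size of a temporarily-owned buffer; the Buffer is
// destroyed automatically when 'owner' goes out of scope, on every
// exit path - including early returns and exceptions.
int use_buffer(int n) {
    std::unique_ptr<Buffer> owner(new Buffer(n));
    return owner->size;
}   // ~unique_ptr runs here and frees the Buffer
```

This is the deterministic alternative to garbage collection the paragraph above alludes to: the release point is known at compile time rather than left to a collector.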

I’ve made a general observation over the years that when a new language comes out it is initially championed by some very capable programmers who then write lots of books and articles on how best to use it. As it gets adopted more broadly by the mainstream, less capable programmers start to use it and they don’t have the time or inclination to read those articles or to learn the product fully. It all starts to go wrong. Language and library features are misunderstood or simply ignored. Best practice goes out the window and people do just what works at the time. The seeds of unrest and mistrust of the language are sown…in the meantime, the evangelists have moved on to the NEXT BIG THING.

I think it’s ironic that languages created because C++ was seen as too complex have gradually added back the features that were taken away. Templates in C++ have essentially become analogous to generics in Java and .NET (though the latter aren’t as powerful – templates can actually solve some problems at compile time without any code needing to execute). Operator overloading was removed in Java, but languages like Groovy, JRuby and C# (which is pretty much a Java dialect) add it right back in because, used properly, it makes a lot of sense. Similarly, multiple inheritance has been re-introduced as specialised mixin classes. I suspect that if someone had come up with a garbage-collected C++ back in the early 1990s then the programming world would have been a different place!
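As a sketch of the compile-time point (the Factorial template is the classic textbook example, not something specific to the products discussed here): the compiler folds Factorial&lt;5&gt;::value to a constant, and static_assert checks it before the program ever runs – something generics alone cannot express.

```cpp
// Sketch: a template computing a value at compile time. Factorial<5>
// is folded to a constant by the compiler; no factorial code runs at
// execution time.

template <unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template <>                       // base case terminates the recursion
struct Factorial<0> {
    static const unsigned long value = 1;
};

// A compile-time check: this line fails to *compile* if the value is
// wrong, demonstrating that the computation happened during compilation.
static_assert(Factorial<5>::value == 120, "computed at compile time");
```

The recursion is unwound entirely by the compiler’s template instantiation, which is exactly the kind of problem-solving-without-execution the paragraph above describes.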

I used Borland C++ right up until 5.02 before switching over to MS Visual Studio 97 to keep in line with standards at work. However, as mentioned before, in 1995 I had found another development product of choice for hobby projects…Delphi.

Borland Delphi (Object Pascal)

As I’ve said before, I liked the VB programming environment for its ease of use, but VB fell short as a language (not so much of a big deal) and came up decidedly short for any “systems”-type work with the Windows API: you either had to fall back to creating toolkit DLLs in C++ which you’d then call from VB, or resort to some very hairy low-level bit-twiddling hacks in VB itself. The earlier versions of VB (prior to VB 4.0) also didn’t truly support object-oriented programming, only allowing GUI Forms to be objects.

In 1995 Delphi gave you it all.

It had an easy-to-use IDE with all the inbuilt goodness you’d expect from a top-line Borland or Microsoft development product. The language was Object Pascal, with the Visual Component Library (VCL) added as an extremely competent layer over the Windows API – but with the option to dump that and program straight to the API if so desired. The VCL was built on the solid foundation of Borland’s OWL, but made more coherent and better suited to a GUI-builder environment.

It included an inbuilt 32-bit assembler (BASM) for high-performance code as required, which interoperated easily with the Pascal code. It also had database components that made it very easy to put together a two-tier database application (although, as with many such 4GL products, more efficient programs needed proper coding outside this framework for large-scale production deployments).

It was also great that, for deployment, it was possible to just compile everything into a single, admittedly large, .EXE file without having to deploy loads of support DLLs. Not only was it an XCOPY deployment, it was just a COPY deployment! I used Delphi quite a lot for home projects across Delphi 1-3, and Delphi 6 (2001).

As time went by, additional features appeared in the Delphi language and the libraries were improved. The concept of shared packages was introduced to allow suites of programs to share resources easily. Simple web-deployment capabilities were added. It was ported to Linux, as Kylix, allowing VCL-like programs to run on that platform. For me, Delphi remained THE development platform until MS VS.NET came out. Even when .NET arrived in 2001, Delphi 7 and its successors provided the ability to target either the .NET runtime or native code with very little change, and so provided a perfect migration path for people who wanted the capabilities of the new runtime whilst retaining the language they knew.

Unfortunately, during 2005-2006, Borland went through some strategic changes, decided to focus on enterprise software-lifecycle tools rather than development tools, and spun Delphi (along with the excellent C++Builder product) off into a separate operating arm, eventually called CodeGear. Whilst CodeGear did some good things, like introducing a free Turbo Delphi version (which lacked some of the enterprise/professional libraries), it seemed to lack direction: .NET compatibility was dropped, and Unicode and 64-bit support never saw the light of day. The changes MS made with the Vista Aero interface and the .NET 2.0 APIs meant they were forever playing catch-up with the MS .NET development tools.

MS VS.NET was essentially a Delphi-killer, having been designed by the same person – Anders Hejlsberg – and incorporating many of the ideas that made Delphi such a compelling product, but with the money and marketing might of Microsoft behind him. VB.NET, much criticised for not being like VB6, actually had a lot more in common with Delphi’s Object Pascal than with VB6, and Delphi developers could easily make the transition.

CodeGear was sold off to Embarcadero, a leading database tools vendor, in 2008. Work on Delphi continues, most notably Unicode support having been introduced, and it remains an excellent product, but realistically how long can it hold out against C# and the might of the .NET runtime?

That said, I still use Delphi 6 for some hobby projects, even though the UI it generates isn’t fully conformant to the new Windows look and feel. The native code it generates is around 1.5x-2.0x as fast as the equivalent Java or .NET code, compilation is “instant” and it’s easy to distribute little utilities without requiring large runtimes to be present. I know…just call me a retro Luddite!


Smalltalk

When I did my Open University postgraduate course, one of the modules was in Object-Oriented Analysis and Design. The language used in that module was Smalltalk-80.

This was another of those “eureka” moments in my programming experience.

Although, as mentioned earlier, I’d learnt my OOAD skills informally via Modula-2 and C++, and these stood me in good stead, I had never used an interactive programming environment as rich as Smalltalk’s. The idea that all development was done in a persistent image, so that objects could be changed on the fly during execution, was new to me. The dynamic, type-free nature of the language, though again not entirely new to me, fitted into this kind of environment so well that it was a revelation and changed the way I perceived coding. Realising that you could just send a message to an object that you knew couldn’t handle it, catch the traceback in the debugger and then implement the method on the fly gave me a real feel for what the practices later known as Extreme Programming were all about.

I recommend that any developer spend some time experimenting in a free Smalltalk environment, for instance Squeak on Linux or Object Arts’ Dolphin on Windows, just to get an idea of what real, interactive, live development is all about. Take a look at these videos using Dolphin to see what I mean.

I still use Smalltalk for experimenting with OOD approaches, to get object responsibilities and interactions clarified. It is the most liberating environment for prototyping out there, but it simply doesn’t have the market penetration – possibly for the same reason that Forth never made much of an impact. It’s nice to see that Ruby, which borrows many of its features from Smalltalk and has had its popularity boosted by the pre-eminent Rails framework, has re-awakened some interest in Smalltalk among this growing group of developers.


Prolog

This was another language that I learned as part of my postgraduate course. Again, it was one of those things that made connections between various concepts I’d already picked up over the years.

I’d had an interest in Expert Systems for a few years and actually coded a few knowledge systems at work using SA&E’s KES II in the late 1980s. (We were a Sperry/UniSys computer operations bureau and they were marketing the product and so looking to get some reference implementations – so I got a good deal of help from the vendor!)

From this I already had experience of the kind of thing that Prolog was doing, but formally learning the concepts forced me to get to grips with the mechanisms of pattern matching, tree structures and automatic backtracking. It made me think of problems in terms of declaration and definition rather than solutions: defining the moving parts and their degrees of freedom, rather than the specific movements they must make.

I’ve used the concepts I learnt a lot in my programming – especially recursive backtracking algorithms for problem solving – but haven’t really used Prolog itself much since learning it. For me, it lacked the imperative logic that was sometimes needed to make solutions a little simpler to understand. Maybe this is why Prolog never achieved really widespread usage.
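As an illustration of that recursive-backtracking style carried over into an imperative language (subset_sum is an invented example, not from the work described above): each call is a Prolog-like choice point, and a failed branch simply returns so the caller can try the alternative.

```cpp
// Sketch of Prolog-style recursive backtracking, written imperatively:
// try a candidate choice, recurse, and fall back (backtrack) on failure.
// Question answered: does any subset of 'items' sum to 'target'?

#include <vector>
#include <cstddef>
#include <cassert>

bool subset_sum(const std::vector<int>& items, std::size_t i, int target) {
    if (target == 0) return true;          // success: goal satisfied
    if (i == items.size()) return false;   // failure: no choices left
    // Choice point: either take items[i] or skip it. If the "take"
    // branch fails, || moves on to the "skip" branch - the backtrack.
    return subset_sum(items, i + 1, target - items[i])   // take
        || subset_sum(items, i + 1, target);             // skip
}
```

The short-circuiting `||` plays the role of Prolog’s automatic backtracking: alternatives are explored in order until one succeeds or all fail.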

It looks like Erlang uses very similar pattern-matching (unification-style) logic when selecting which function clause to run, but with some imperative extensions added to the language. It will be interesting to learn it at some point and see how the two compare.


SQL

I’ve been using SQL since the early 1990s and studied relational algebra formally as part of my postgraduate diploma in 1994-1995. What can I say? It’s still probably the best way of dealing with data on an enterprise scale, despite the increasing re-emergence of ISAM-like solutions for mega-scale filesystems as provided by Google’s and Amazon’s PaaS offerings.


Java

Ah, yes. Java. I did some early work in Java 1.0, in 1995 during my postgraduate course. I’ll talk about it in more detail as I describe the “Internet” years next time!

So that’s the end of the middle bit. I’ve now covered up to the late 90s and early 2000s: the Internet is starting to achieve real importance, machines are getting more powerful and interpreted runtimes are becoming feasible for large-scale production systems.

In the next part I’ll talk about scripting languages, .NET and Java, and functional languages.
