It’s been nearly twenty-eight years since the release of Turbo Pascal 6.0 on October 4, 1990. Sometimes it feels to me like the evolution of programming languages and development environments has been a downhill slide ever since.
I’m not the only one who feels this way (see the numerous Hacker News comments that back me up). Nevertheless, to say that programming languages and tools in the mainstream – not to mention academia – haven’t evolved, that we haven’t improved on 1980s tech, is absurd. So why does it feel like we’re still trying to find our way back home to the Turbo Pascal experience?
If you find this notion ridiculous because you thought Pascal was a toy language fit only for teaching, I can only assume you never built anything serious with Turbo Pascal. Other dialects (UCSD, for instance) weren’t attractive for serious development, and their development environments were comparatively lacking or even painful to use, so if your impressions from school turned you off to Pascal, just trust me, you missed out in a big way.
My high-water mark for the Pascal experience sits around 1990, but earlier versions were impressive for their time as well: I first used Turbo Pascal 3.0 in 1985 and it completely surpassed any programming experience I’d had up to that point (Applesoft BASIC, TI-BASIC, UCSD Pascal on the Apple IIe). To get a flavor of what it was like, try this online emulator.
In this version (similar to 3.0) you toggle between a code editor and a minimal menu system. You got a full-screen editor integrated with the compiler – you could test with “run” – and the integration between the compiler and editor was amazing for the time: you’d get dropped right on the line where your mistake was, with an excellent explanation from the compiler. This was revolutionary. I mean, I’m sure Smalltalk had something better three years earlier, but none of us knew about it.
Subsequent versions came with a menu-driven IDE not unlike the IDEs we have available today: 6.0 provided context-aware help and documentation, an integrated debugger and syntax highlighting. And it was fast. Really fast. At the time lots of compilers were terribly slow. And if you’ve built a large C++ project recently, you know build speed is still a problem.
As I see it, a good developer experience meets all of the following points:
- Fast build, test iteration
- Quick resolution of build problems
- Minimize runtime errors / crashes
- Good debugging experience
- Easy deployment and packaging for applications
Turbo Pascal scored high marks on all of the above.
Here are the main strengths of Turbo Pascal, both as a language and IDE. Modern languages and tools have overtaken TP on every point but the combination TP offered is still compelling.
- Fast builds, fast execution
- Good module system with “units”
- Easy-to-deploy executables
- Excellent compiler error messages
- Excellent compiler and editor integration
- Built-in debugger
- Built-in help and documentation
- Great standard library
- Strong, statically typed language
- Steered programmers away from memory management problems
- Nice unique language features: Sets and files of records as first class parts of the language
- Fast strings that avoid buffer overflows
- Nested procedures and functions (a very effective way to structure code when used right)
- Automatic bounds checking and good runtime (for the time)
Of course there were some negatives too. I’ll briefly touch on those at the end of the post in my “reality check.”
Expanding on a few of the above points:
Getting quick feedback with TDD doesn’t work so well when your builds take twenty minutes; back in the eighties we could hit “save” and “run” in our Turbo Pascal editors over and over, checking our results faster than you can fire up your Rails test suite today, while making do with 16 MHz, 8 MHz or even slower processors.
While our projects weren’t huge by today’s standards, build speed was impressive: Borland claimed 34,000 lines per second (so your tests would kick off in fractions of a second in most cases). Incremental builds were supported with “units” (like modules or packages), so typical code-build-test cycles were very fast indeed.
Probably the closest competitors today in terms of compilation and test speed would be Go and D; C# does fairly well too.
Strong, Static Types
I like to describe Turbo Pascal as a domain-specific language for writing applications; the type system lets you construct an application with less friction than most languages, but meta-programming (even “generics”) isn’t available. So writing libraries that add powerful new data structures isn’t possible the way it is with C++ templates.
Compared to my Turbo Pascal experience, building projects with interpreted languages feels frustrating. C and C++ don’t hold up well either; their type systems are relatively weak and their build times terribly slow.
The initial lower barrier to writing code with Python or Ruby, say, feels like a breath of fresh air. Code is less “cluttered” and easier for a novice to read and write, while requiring fewer lines of code for similar functionality. Unfortunately, to produce production-ready code you must put a lot of tests in place – the complexity has to go somewhere. You’re left with a comparatively slow application that doesn’t really bypass the need for a build step even if it’s interpreted; you have to continuously maintain and run the unit tests to ensure a sound build. I’m not saying a Turbo Pascal application didn’t need any tests – though many didn’t have any – but the volume of code needed to do the same job would be much, much less, and it would run very fast.
The Go language is actually pretty close to the level of sophistication the Turbo Pascal language had, which is to say not extremely advanced, but great compared to C or most other languages in widespread use during the 1980s. Today, if you want the really secure feeling of a type-checked program, you can do better: try Scala, Rust, Ada or Haskell.
Good Module System
The “units” of Turbo Pascal were brilliant, and abandoning them for the C way of doing things was a real let-down. Even Ruby gems or Python libraries don’t compare (though we’re setting aside the whole question of managing versions of libraries and package management which Ruby and Python try to deal with.)
You could compose a program with:
```pascal
uses MyGraphicsLib, MapGenerator, MapViewer;

{ ... }

begin
  main;
end.
```
where the ‘uses’ clause listed all the required units. A unit consisted of two parts, an interface and an implementation:
```pascal
unit MapViewer;

interface

const
  { export any necessary constants }

type
  { export types if needed, including object types }

function labelForArea(x: integer; y: integer): string;
{ more }

implementation

function labelForArea(x: integer; y: integer): string;
begin
  { do stuff }
end;

end.
```
These units were part of the language, not simply a way to include multiple source files in a project. You could compile them separately and link them in at build time. Only the parts of a unit actually used by the application became part of the final executable, and the implementation details weren’t seen by the main program. Units neatly allowed you to structure a large program and made for even faster builds.
Today Go’s packages, along with structs and interfaces, are similar in a lot of ways (very good if you set aside the question of pulling in and managing code from external sources), and many other languages have good module systems – Rust’s, paired with Cargo, is especially nice. C++ still does not, though modules are on the agenda for C++20. Java in practice isn’t as nice, though JARs provide something similar to Turbo Pascal’s TPU files (and much better, really).
TPUs didn’t automatically include versioning (though you could informally include version info in your interface or TPU name). You didn’t automatically get namespaces with units as far as I recall; you counted on unique type names to get you through.
Nice Unique Language Features
Turbo Pascal supported more powerful versions of Pascal’s “set” type, typically combined with an ordinal type. Think of these as better C enumerations that you can actually do set operations on. They provided some of the utility of Ruby’s symbol type too. Once you wrap your head around their use you never want to go back.
Files of records were another niche but powerful built-in feature. Instead of using text files for everything you could make a type like “customerFile = File of Customer;”.
Here’s a quick example of ordinal types, sets and File types.
```pascal
type
  piece = (ship, castle, knight, wagon, village);
  pieces = set of piece;
  gamePiece = record
    x, y: integer;
    strength: byte;
    category: pieces;
    owner: player;
  end;
  savedGameFile = File of gamePiece;

const
  movable: pieces = [ship, wagon, knight];
  stationary: pieces = [castle, village];
```
Wow, it’s been a long time. I can barely recall the syntax. Did I get it right?
Nested Functions and Procedures
You can put functions inside other functions (procedures are just “void” functions in C terms). This is a great feature, especially when you don’t need objects to manage state but want to impose some structure and limit scope. Among other things, you can have functions with the same name in many places where the particular implementation has to vary.
Assignment and Equality
One last point: the assignment operator for variables was different from the symbol used to set constant values and type definitions. You used “:=” to assign a variable and “=” for equality checks, constants and type definitions. No more mistakes between “==” and “=”.
Fast Strings / Great Strings / Terrible Strings
Unlike C strings – null-terminated char arrays, which you typically allocate on the heap – Pascal strings were byte arrays on the stack with the first byte containing the length of the string. This meant checking the length was O(1) (and fast, at that), so for some string algorithms you’d get much better performance without the extra coding you’d need in C. And because the string knew its length (and had a maximum of 255), you didn’t get the buffer overflows and out-of-bounds problems of C.
But this same design meant wasted memory, because those Pascal strings had their maximum length as part of their type: The default was 255, and you could make your own to save space:
```pascal
type str20 = string[20];
```
On balance this approach is great for avoiding crashes, but bad for applications where you need flexible lengths or really long strings. In those cases you’d typically allocate memory from the heap and work with a pointer to chars.
Modern C++ and Java strings, while not without their problems, combine the best of the Pascal approach and C strings, while encapsulating an entire string library in their string classes.
Encourage Memory Safety
By default everything was on the stack, and the calling convention was pass-by-value; you could pass by reference when needed for performance. Because of these defaults you’d find yourself allocating heap memory much less often than in C. Memory management was rarely an issue; you’d only reach for pointers when you had to and not before.
It’s entirely possible to write terrible, complicated, unsafe code in Pascal, but the language steers you away from an unsafe style.
Modern C++ still isn’t as safe, though if you follow best practices such as putting all you can on the stack and passing by reference instead of using pointers, you’ll mostly do all right. Even Java isn’t as safe-feeling as Turbo Pascal: memory leaks aren’t possible, but the dreaded “null pointer exception” is way too easy to hit (the most recent versions of Java should be better, though). I don’t have a strong enough feeling for Go to say for sure, though I think it’s too easy to pass a nil reference as a function argument.
The biggest negative in hindsight was the platform-specific nature of Turbo Pascal; it started out on CP/M and DOS, which covered most microcomputers at one time, but UNIX was soon to be a big influence and Turbo Pascal never got ported to UNIX as far as I know. There were Pascal variants (a very nice one on VAX/VMS), but the variety of dialects (and the limitations of many of them) kept Pascal from wide acceptance outside of PC-DOS once CP/M had faded.
Early on – even before the Mac, with the Lisa – Apple used Object Pascal (their own dialect, not Borland’s) to write Macintosh apps. When they switched to the PowerPC processors in the early nineties they switched to C and C++ (and later Objective-C, after NeXT got folded into Apple). Borland produced an Object Pascal for the Mac as well, but in the early nineties focused on developing Delphi on Windows; Mac OS was a small market by comparison, and Apple seemed to be fading fast at that time.
As nice as Turbo Pascal was (and Delphi after it – really fantastic), you wouldn’t want to develop a lot of modern applications with the language and tooling. A while back I looked at a web framework written in Free Pascal using Object Pascal, and let’s just say it was verbose in the extreme and pretty nasty looking.
Over the last twelve years or so, modern languages have adopted “generics” as a way to abstract over types and algorithms. C++ has these in the form of templates; Java, Scala, D and many others have them as well. Go does not, though its built-in containers can hold any type, which will take you a long way (the lack of generics is a huge controversy in discussions of the Go language). Pascal has no such facility and appears quite rigid by comparison.
For high-level work, as Pascal was intended, today we have reference-counted memory management at the least (modern C++, Rust) and garbage collection in most competitors like Go, D, Java and Scala, not to mention the interpreted languages out there. To be fair, Delphi offers the “create” constructor for classes and lets you avoid manually calling “new” and “free”. If you are inclined to use Pascal to develop software these days you should use the Delphi dialect available with Free Pascal.
In the end I have to conclude that the power of Borland’s Turbo Pascal came from the tight integration of the IDE and other tools with the language: You can see the closest thing to it in Microsoft’s Visual Studio and C#. Perhaps not coincidentally, the inventor of Turbo Pascal also designed C#. It’s something like the way Apple can control the user experience on their iPhone, with the downside that the whole ecosystem is a “walled garden.”
Other language teams seem to have learned this lesson: what makes Go as good as it is isn’t so much the language as the complete package – the compiler and its tools like the formatter, and the fact that the compiler produces a single binary with minimal dependencies, making deployment simple. The D language and tools also hold up well. Rust comes with an excellent package manager in the form of Cargo.
Java, on the other hand, is very much just one piece of a puzzle. When paired with IntelliJ you have a passable development environment; Eclipse, not so much. To be fair, Java IDEs try to do much more than Turbo Pascal did: refactoring, source control integration, plug-in support, code completion and more.
It may be that today’s developers suffer from the paradox of choice: we can mix and match tools, libraries and frameworks, which is great, but we spend our time trying to find the perfect combination. Choice is supposed to be good, but now you face a combinatorial explosion, so many things can go wrong depending on what you choose, and by the time you have everything sorted out and really like your environment, one of your tools or libraries has become obsolete.