The sequel you’ve all been waiting for! Last time we had a look at what happens before code is put to the stack; this time we plunge into the meat and bones of making games. We’ll peer through the fascinating yet far too brief windows we’ve been given into the game-making process through the ages, examining the tools developers used to build characters, levels, systems, and so on. You’ll learn how some of the most legendary companies in the whole business put their works together in the days before standard engines, technologies, or processes.
This blog has brought up Gremlin a number of times, and for good reason. As one of the most diversified companies of the 1970s, they offer a great deal of insight into the technologies and business strategies of the day. In 1977 they established a division to put out the Noval computer system, based partially on the Z-80 driven technology they were using to create arcade games like Blockade. While the Noval failed to become the office workstation they hoped it would be, it served as a pivotal and highly advanced development station for both Gremlin and their later parent Sega. (There exists a Sega brochure showing the Japanese development group working on Noval systems, though unfortunately I can’t share it at the moment.)
One of the artifacts we looked at last time was Bill Blewett’s notebook for Ago Kiss’ Frogs characters. Creative Computing happened to catch Mr. Blewett demonstrating this tool with the same characters (though different poses), mainly to show off how to create pixel art on the Noval. He used a device called the Bit Pad to draw without needing to enter hex values or even use a cursor interface. Tablets like this operated basically the same way artists’ tablets do today, if with less precision. For many people, this was the more intuitive way to rough out pixel art. The program Mr. Blewett used shows a full view of the image, an inverted view, and a zoomed-in view in the top left corner for fine adjustments. These kinds of programs make the start of the game creation process most tangible, even if hardware technically comes first. One frame at a time, a game like Frogs comes to life.
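To give a feel for what an editor like that actually computes, here is a small sketch in modern Python of deriving an inverted view and a magnified corner view from one 1-bit frame. The function names and the 2x2 example frame are my own invention, not anything from the Noval software:

```python
# Hypothetical sketch of a multi-view pixel editor's display logic.

def inverted_view(bitmap):
    """Flip every pixel (0 <-> 1), like the editor's inverted display."""
    return [[1 - px for px in row] for row in bitmap]

def zoomed_view(bitmap, rows, cols, factor):
    """Magnify the top-left rows x cols corner by an integer factor."""
    corner = [row[:cols] for row in bitmap[:rows]]
    zoomed = []
    for row in corner:
        wide = [px for px in row for _ in range(factor)]  # stretch horizontally
        zoomed.extend([wide] * factor)                    # repeat vertically
    return zoomed

frame = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]
print(inverted_view(frame)[0])           # → [1, 0, 0, 1]
print(zoomed_view(frame, 2, 2, 2)[0])    # → [0, 0, 1, 1]
```

The point is only that all three views are cheap transformations of the same frame buffer, which is why even a 1977 machine could show them side by side.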
It may be a bit surprising to learn that Atari actually didn’t have very usable development tools for a long time despite being the industry leader. They had access to powerful mainframes like the PDP-11, but the methods they used to create games on both the arcade and consumer ends remained largely the same from the mid-70s to the beginning of the 80s. There’s no indication of friendlier interface tools that would let people like artists and designers see the game before it was put to code. Instead there was far more emphasis on working with a programmer and assembling each part meticulously to minimize technical issues. Above you can see the typical process: code revision on paper, collaborative programming, solo finalization, and the actual hex code editors they largely used.
This emphasis did mean that Atari had a lot of very powerful coding tools at their fingertips. The gallery here shows off some of the work done for one of the ports of Xevious, with the last image showing file structures that organize each map into its own separate entity. While it’s hard to tell how advanced the competition was at this time in terms of these quality-of-life tools for programmers, I would guess it was fairly uncommon to organize code as efficiently as Atari’s programmers did. This sort of organization allowed the most to be made of resources on very limited hardware.
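As a loose modern analogy (entirely my own construction, not Atari’s actual format), the benefit of one-map-per-entity organization is that maps can be edited independently and then packed into a single table of offsets for the engine to index. A minimal sketch:

```python
# Hypothetical map packer: each map lives as its own record, then gets
# packed back to back with an (offset, length) table for lookup.

MAPS = {                       # stand-ins for per-map "files"
    "area_1": bytes([0x01, 0x02, 0x03]),
    "area_2": bytes([0x10, 0x11]),
}

def build_map_table(maps):
    """Concatenate maps and record where each one landed."""
    blob, table, offset = bytearray(), {}, 0
    for name, data in sorted(maps.items()):
        table[name] = (offset, len(data))
        blob.extend(data)
        offset += len(data)
    return bytes(blob), table

blob, table = build_map_table(MAPS)
print(table["area_2"])   # → (3, 2): starts after area_1's three bytes
```

Change one map and only its entry and offset shift; the others are untouched, which is exactly the resource discipline the gallery hints at.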
Eventually Atari did get together some proper visualization tools, such as this artist workstation for designing the tiles in the unreleased 1982 game Quak! (unrelated to the 1974 light gun game). Not only is the artist provided with a giant tablet which more or less works like a mouse, they also get two massive screens to view their work on. It’s almost more like an animator’s desk than a typical computer artist’s station, and it was probably less efficient than they may have hoped.
Just as Nintendo properly formalized their game conceptualization process, they also created a plethora of in-house tools to meet the meticulous demands of the planning sheets they had to work under. The third picture shows a very clear view of the pixel art tool, with palettes at the top, all sprites in a certain memory block at the left, the current frame in the center, and English commands peppered along the right side. Takashi Tezuka also demonstrates a sort of scanning system which could take graph paper drawings and get them most of the way to a finished picture. However good the tools, sometimes pen and paper is just better.
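The core of a scanner like Tezuka’s is conceptually simple, even if the real pipeline is unknown. As a guess at the idea (entirely my sketch; cell size and threshold are assumptions), you sample one value per graph-paper cell and threshold dark ink into set pixels:

```python
# Hypothetical scan-to-pixels step: grayscale scan -> 1-bit grid.

def scan_to_pixels(gray, cell, threshold=128):
    """Sample the centre of each cell; dark marks become set pixels."""
    rows = len(gray) // cell
    cols = len(gray[0]) // cell
    mid = cell // 2
    return [[1 if gray[r * cell + mid][c * cell + mid] < threshold else 0
             for c in range(cols)]
            for r in range(rows)]

# A 4x4 "scan" with 2x2 cells: dark ink top-left and bottom-right.
scan = [
    [200,  30, 220, 210],
    [ 40,  20, 230, 240],
    [210, 220,  10,  25],
    [230, 250,  35,  15],
]
print(scan_to_pixels(scan, cell=2))   # → [[1, 0], [0, 1]]
```

This also shows why such systems only got a drawing "most of the way": smudges, faint lines, and misaligned grids all land on the wrong side of the threshold, leaving cleanup for the artist.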
However, Nintendo did not allow third party developers to use these tools. Many Famicom and NES developers would struggle greatly with creating development systems, but Namco actually reverse engineered the Famicom to start creating games on it. The pixel art tool they came up with was far simpler than Nintendo’s, at least judging by this clip of Masanobu Endo working with it. One advantage it had was the ability to view full animations directly in the editor, such as this 16-frame explosion.
Tools for Famicom development would advance significantly over its lifespan. Eventually they gained mouse-driven interfaces, which allowed Shigeru Miyamoto to really get hands-on with the computer for the first time. In this documentary footage shot during the creation of Super Mario Bros. 3, Miyamoto can be seen editing a placeholder chunk of a level with his pointer device. In a real-time game state (notice the Mario animation), Miyamoto changes three large blocks in the level into vertical logs without having to recompile anything.
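What makes "no recompiling" possible is that the level is plain data the running game reads every frame, so an editor can mutate it in place. A minimal illustration (my own sketch, not Nintendo’s tooling; the tile names are invented):

```python
# Data-driven level: the game reads the data each frame, so an editor
# can swap tiles live and the next frame simply shows the change.

level = ["block", "block", "block", "coin", "pipe"]   # hypothetical row

def render(level):
    """Stand-in for the game's per-frame draw: it just reads the data."""
    return " ".join(level)

before = render(level)
for i in range(3):          # the "editor" swaps three blocks for logs
    level[i] = "log"
after = render(level)       # next frame reflects the edit, no rebuild
print(after)                # → log log log coin pipe
```

Code that interprets data doesn’t change when the data does, which is the same principle behind every live level editor since.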
Sega was absolutely no slouch in standard development tools either, with their Sega Digitizer technology being an internal system iterated upon through the years. It was driven by a light pen in the beginning, as seen with Pitfall II, and it had two screens and allowed for both pixel art and level creation directly inside the interface. It moved to a tablet design with the second iteration, seen being used to create the large characters in Golden Axe II. The third version can be seen very briefly in motion in this documentary clip, displaying the swiftness and ease with which artists could put fine detail on each individual sprite.
Mega Drive development did not take a back seat either. In this clip shared by Yuji Naka, the Sega development environment circa 1990 is displayed. By this time Sega’s programmers were using PC-98 systems and had moved to localized Japanese file structures, as shown on screen briefly. Most interestingly, it also shows Naka himself with what he described as Sonic’s collision code on his second monitor. The familiar bent curves are immediately recognizable, though it does seem like there was still a lot of hand-editing as opposed to free-form movement of all elements on the screen via an interface.
The later stage of development in this era is provided by the book Video Games: How It’s Made. A number of unidentified studios appear in the book, though it’s pretty clear that the Jurassic Park related material was done at BlueSky Software. Here we can see a Maya-like interface (it may even be an Autodesk product; I just wouldn’t know) used to create the pre-rendered graphics seen in the Jurassic Park Genesis game. It’s all remarkably similar to what we have today, minus some shuffling of visual priorities. Second is a side not often seen: sound being analyzed and balanced in an audio program. It’s unclear what platform this sound is for, but a look into the music creation process for the Sega Genesis can be seen in this documentary clip with Howard Drossin of STI. Finally we have a typical workstation for a console developer on the Super Nintendo, with very mysterious surrounding hardware and an unknown game on the screen.
The halfway point between 2D and 3D can also be observed in the System Shock level editor currently held at the Strong Museum of Play. Tiles are used to create maps from the top down, not unlike Doom’s files, though they have the potential for verticality, as seen in the drop-down menu. This editor, which Doug Church created, allowed the designers to view the levels in several different ways before making final decisions, which was important when each level had its own designer. Everything had to work and be understandable, and the icons seem to indicate they wanted to communicate the design language of the game even down to the tools. Not everything is plain text.
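That "2D grid with verticality" idea can be sketched as a tile that carries floor and ceiling heights alongside its type. The field names and values here are my guesses based on the editor description, not System Shock’s real data format:

```python
# Rough sketch of a tile-with-heights map: 2D layout, per-tile verticality.

from dataclasses import dataclass

@dataclass
class Tile:
    kind: str          # e.g. "open", "solid", "diagonal" (invented names)
    floor: int = 0     # floor height step
    ceiling: int = 8   # ceiling height step

def headroom(tile):
    """Vertical space a designer would check when placing the tile."""
    return tile.ceiling - tile.floor

grid = [[Tile("open") for _ in range(3)] for _ in range(3)]
grid[1][1] = Tile("open", floor=2, ceiling=6)   # raised floor, lowered ceiling
print(headroom(grid[1][1]))   # → 4
```

A top-down editor only has to draw the `kind` of each tile, while the height fields give the renderer (and the drop-down menu) the third dimension, which is exactly the halfway point between a flat tile map and true 3D geometry.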
Let’s end our sojourn into development tools by looking at the refinement at Capcom around 1996. Not only has the clarity of their development software increased dramatically, but the beginnings of their next step are shown in 3D modeling. Melding the two sides of game development to create textured worlds and other persistent 2D graphical elements was going to remain very important going forwards. The experience of creating game tools would have to be learned all over again, though this time helped by more widespread official support rather than the many trials by fire seen above.
This was a far-from-comprehensive but, I hope, insightful look at the first stages of turning concepts into reality. Hopefully we can continue to gather interesting windows into the past and bring development stories to the public, past the secretive corporate walls of projects and companies long gone. We didn’t even get into development software that was released commercially, which may be a subject for a future post!
Keep learning, keep making!