This is a work in progress. It will likely become a book at some point.
What is Programming About?
Originally, computers were people hired by governments or large businesses. They would do large numbers of potentially complicated computations by hand and file the results back and forth. “Programming” back then meant figuring out how the work for a computation should be doled out to individuals or groups, and how results from one group should be handed to another for further computation. In that sense, programming was mostly an operations problem: managing the dependencies between parts of the problem and communicating results from one part of the computation to another.
As technology evolved, several factors converged to create computing as it is today. Logicians and mathematicians did the legwork of developing the core concepts of representing ideas as numbers. People working in business communications wanted to make their work more efficient, so they developed techniques for transferring information quickly. And people were attempting to solve large problems which couldn’t be solved by human computers within the time constraints imposed by war. All of these developments converged under the pressure of war to create computers. Once the transistor was made and integrated circuits were perfected, the cost of computing dropped drastically. Large businesses became locked into using computers because of the reduced cost: computations could be done faster, and fewer people (remember the “computers” of the past?) needed to be employed. Additionally, fewer tools needed to be purchased (although semiconductor companies were quite fond of pushing new tools every two years). As computers were mass produced and adopted at scale, their cost went down further. Today, it is very difficult to find electronic devices which do not have at least a microcontroller in their circuits.
Three Dimensions of Computation
This means we can dissect programming and computing into three main dimensions:

- Arithmetic
- Logic
- Communication
These three dimensions have been used since the time of Sumeria, where scribes quite literally tabulated information about business into clay tablets. They accounted for much of their government and business transactions. Today, the largest application of computing is still accounting. Most computers sit around doing little besides adding up transactions, points, views, clicks, and coordinates.
Though we think of Facebook as a communication platform, it is really an accounting and advertising platform. Facebook is quite simply a Sumerian pipe dream of recording and accounting for as much as possible about our lives and interactions with businesses. Facebook tabulates each of the likes (and, more recently, the other reactions: “love”, “care”, “haha”, “wow”, “sad”, and “angry”) on posts and comments, building up a complete accounting of interactions on its platform. These bits are used to profile you and match you with targeted advertising; they are also archived and sent into “cold storage”, where the permanence of the bits will never rival that of the markings on Sumerian clay tablets.
Can you imagine how awkward Facebook would be if, instead of being a software service, it were an actual Sumerian scribe following you around tabulating every reaction and message? The scribe would have to go through your photo albums, memorize every face, and point each person out when you came across them. They would take down messages and ferry them between you and your friends’ Sumerian scribes. They would also use the tabulated results of all your likes, reactions, message sentiments, and locations to suggest businesses you might want something from. Think about the cost, and the overtime pay, given how often we’re on our phones using Facebook!
Or can you imagine what it would be like to hire secretaries for everyone to organize and send messages the way Slack and similar chat applications let us do? It would certainly create more jobs, but it would also make the service very expensive. This is the power of computers: they make things cheaper by automating as much of the accounting as possible.
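The automated accounting that makes this cheap is, at bottom, just tallying. A minimal sketch in Python, where the stream of reaction events is invented for illustration:

```python
from collections import Counter

# A hypothetical stream of reaction events as a platform might record
# them; each entry is one user reaction on some post.
events = ["like", "love", "like", "haha", "sad", "like", "angry", "care"]

# The "accounting" is nothing more than a running tally per reaction.
ledger = Counter(events)

print(ledger["like"])        # 3
print(sum(ledger.values()))  # 8 interactions recorded in total
```

A human scribe doing this at Facebook’s scale would drown; a computer does it in microseconds, which is the entire economic point.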
On the other hand, what Facebook does is still expensive. It has many costs from the sheer scale at which it operates, and some of those costs are ecological. For instance, in 2019, Facebook used 5,140,000 MWh of power, roughly one tenth of the energy used by New York City in the same year. It also used 2,731,000 cubic meters of water to cool its data centers, approximately the water usage of 1.7 New York City populations. Since this water is heated, it can raise the temperature of whatever body of water it is released into, which is harmful to fish in rivers.
Echoes in Games and Art
Many of the games we’ve played through the ages are based on a kind of rule-based accounting. Remove pieces from a board by moving in certain ways (Chess and Checkers). Lay down a certain pattern of cards which, according to the rules, tabulates to a higher score (card games). Roll a higher number with the dice (most games of chance). Form the longest words with the most infrequent letters to get the highest score (Scrabble). Form the largest patches of area using black or white stones, with many other rules dictating the protocol of the game (Go). Even if we were to come up with games of emotions, we might unconsciously adapt the rules of golf, minimizing the number of negative reactions one garners before rewarding people with the emotional response they want (we do have these games: comedy, storytelling, music, etc.).
Many of the things we care about across our cultures are based on some form of math and logical thinking. In the games laid out above, we can see the three main dimensions of arithmetic, logic, and communication embodied. In a card game, one has to add up the values of the cards, logically ensure the rules have been followed, and perhaps deduce the likelihood of other players holding certain cards; the communication is embedded in the visibility of the cards and in people’s facial expressions. In cooking, we use a rough arithmetic of proportions, a logic of how and when the ingredients are combined, and communication to propagate the knowledge of those foods. The same could be said for music, weaving, and multitudes of other things.
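The card-game example above can be made concrete in a few lines: arithmetic tallies the hand, logic checks a rule, and communication is the reported result. The card values and the one-ace rule below are invented for illustration, not taken from any real game:

```python
# Arithmetic: tally the values of the cards in a hand.
# (Both the values and the rule below are made up for this sketch.)
CARD_VALUES = {"A": 11, "K": 4, "Q": 3, "J": 2, "10": 10}

def score(hand):
    return sum(CARD_VALUES[card] for card in hand)

# Logic: check that the hand follows a rule of the game.
def is_legal(hand):
    return hand.count("A") <= 1  # toy rule: at most one ace per hand

hand = ["A", "K", "10"]
print(score(hand))     # 25
print(is_legal(hand))  # True
```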
Summary of Programming
To reiterate, programming is the art and science of managing the transfer of information over a vastly complex communications network between the peripherals, the many forms of memory, and the processing centers within each processor. Most data centers and large distributed systems have the complexity of galaxies of communications systems all tied together; and within those galaxies, multiple worlds of processors are tied together. With today’s web applications, the communication between the various components of our distributed systems has become even more complex. In some ways, it’s a wonder we’ve even bothered building these complex behemoths. In other ways, it’s the obvious outcome of a culture bent on the “scientific management” method of maximizing profit and productivity while minimizing cost through planning, scheduling, and standardization.
The Four Necessary Bases for the Emergence of Computers
The necessary developments which led to computers and programming mostly fell along four divisions:

- Business and military management
- Mathematics and logic
- Communications
- Physical mechanisms
Business and Military Management
We started counting, and we realized the benefit of keeping symbols around as a proxy for the counted. Then we started managing the way things were built. Then we started managing how people worked. As a result of these pressures and converging technologies, programming became the art of managing the computers. Nowadays, we even have arbitrary levels of managers to manage the computer managers (programmers).
Mathematics and Logic

This developed hand in hand with the business and military aspects, but is interesting for the independence pure mathematics obtained. At the core of programming today’s technological marvels is the concept of a symbol standing in for another symbol. Gödel, Church, Turing, and Hopper each pushed this a level further. Gödel came up with a system of encoding logic using arithmetic and, later, a general system of recursive functions. Church built upon this, creating a lambda calculus of computation. Turing built upon it, creating a non-physical mechanical system that encoded logic and values in an infinite memory. Hopper realized that this could be extended to create a higher-level language in which certain values encode for other values specific to the machine they are meant to operate on. This same principle was extended further and further until we could use math to display video mapped onto a Cartesian coordinate system, which was then mapped back into a linear address space. This sort of thought was extended until we could encode audio as binary digits. Then, in the 1980s, we discovered methods of compression which were necessary to make multimedia computing and the Internet work given the limited resources we had at the time.
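Gödel’s encoding of logic in arithmetic can be sketched concretely: assign each symbol a number, then pack a formula into a single integer as a product of prime powers. The symbol table below is a toy, not Gödel’s actual scheme:

```python
# Toy Gödel numbering. Each symbol gets a code; a formula (a sequence
# of symbols) becomes one integer: 2^c1 * 3^c2 * 5^c3 * ...
# The symbol codes here are invented for illustration.
SYMBOLS = {"0": 1, "s": 2, "=": 3, "(": 4, ")": 5}
PRIMES = [2, 3, 5, 7, 11, 13, 17]

def godel_number(formula):
    n = 1
    for prime, sym in zip(PRIMES, formula):
        n *= prime ** SYMBOLS[sym]
    return n

# "s0 = s0", the successor of zero equals the successor of zero:
# 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500
print(godel_number(["s", "0", "=", "s", "0"]))  # 808500
```

Because prime factorization is unique, the original formula can always be recovered from the number, which is what lets statements about formulas become statements about integers.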
Communications

The communications basis is separate from the physical basis because it evolved separately, until the concepts were merged in the 1960s. The communications basis started with the telegraph, then teleprinting, then the telephone. As the telephone developed automatic ways of routing calls, the necessary concepts of digital addressing were developed.
Physical Mechanisms

The physical basis of computing is intriguing, because we had been building similar, yet limited, physical mechanisms since probably around 300 BCE. While Babbage had an amazing idea with the analytical engine, the design was never finalized and consequently never built. It wasn’t until the electromechanical developments of the communications basis that we could transfer information over the distances needed to implement computers powerful enough to be general-purpose devices.
Business and Military Management Basis
- 5000 BCE — Sumerians start recording inventories, loans, taxes, business transactions, and even complaints. For a longer overview of logistics through the ages, see Tepić, J., Tanackov, I., & Stojić, G. (2011). Ancient logistics–historical timeline and etymology. Tehnički vjesnik, 18(3), 379-384.
- 4000 BCE — Egyptians use various methods of planning and organizing labor. There is evidence of this also happening near the coast of Peru (the irrigation canals of the Zaña Valley and the Aspero complex). Unfortunately, what happened in the central Andes rarely figures into the history, because of how imperial conquests typically go.
- 1721 — John Lombe’s water powered silk mill.
- 1765 — Europe starts using interchangeable parts again. This was a concept used before 200 BCE, but it never seems to have taken off except in times of war.
- 1766 — Matthew Boulton’s Soho Manufactory.
- 1776 — The Wealth of Nations lays out how division of labor is responsible for economic growth. This is most definitely not the first discussion of division of labor, which has been discussed in multiple locales since at least 1100 BCE. Thinking in terms of single-purpose laborers leads to thoughts about floor-plan optimization and communication between distant parts of a business; the shape of the human “computers” and their work follows from this thought. Even the layout of modern processors follows from it.
- 1795 — Eli Terry Sr., my 3rd cousin 5x removed, invents a milling machine and begins making interchangeable parts for clocks along with an assembly line. He is responsible for making clocks affordable to the average person. As a side note, he was an anti-slavery abolitionist, and his house had a secret room for runaway slaves.
- 1814 — Francis Cabot Lowell created the Boston Manufacturing Company, the first vertically integrated manufacturing plant, taking in raw materials and outputting final products. This became the Lowell System.
- 1830 — Antoine-Henri Jomini coined the French word logistique (rising from the French logis meaning lodgings) in his Précis de l’Art de la Guerre (Summary of the Art of War). His definition was “… the art of well-ordering the functionings of an army, of well combining the order of troops in columns, the times of their departure, their itinerary, the means of communication necessary to assure their arrival at a named point …".
- 1869 — U.S. Transcontinental Railroad completed.
- 1883 — Standard time zones are adopted by the railroads in order to synchronize schedules.
- 1904 — Harrington Emerson’s “betterment work” begins at the Santa Fe Railway. Bonuses were introduced as positive feedback for good performance. Time studies were conducted, and an effort was made to use normal people as subjects so as not to set too high a bar for performance. Tools were standardized so that anyone could pick up a task and have the right tools. Improved methods of cost accounting were introduced. All work and tasks were assigned through a central board that displayed the status and assignments for everyone and everything.
- 1910-1915 — Henry Gantt invents the Gantt chart, which is used for planning during WWI.
- 1911 — Frederick Winslow Taylor publishes The Principles of Scientific Management. His disciple, Charles Edward Knoeppel, also publishes Maximum Production in Machine-Shop and Foundry.
- 1912 — Lillian + Frank Gilbreth author A Primer of Scientific Management.
- 1914 — Lillian Gilbreth’s thesis is published: The Psychology of Management: the Function of the Mind in Determining, Teaching and Installing Methods of Least Waste.
- 1915 — Knoeppel publishes Installing efficiency methods.
- 1917-1919 — Knoeppel Publishes six volumes under the title Organization and administration.
- 1920s — Mary Parker Follett’s understanding of lateral processes within hierarchical orgs led to a matrix organization at DuPont. Charles Bedaux built on the work of Frederick Winslow Taylor and Charles Edward Knoeppel.
- 1921 — Lillian + Frank Gilbreth introduce flow process charts in “Process Charts, First Steps in Finding the One Best Way to Do Work”.
- 1922 — Leon P. Alford starts pushing systematic management. Later he works with Alexander Hamilton Church, and some of their ideas were extensions of Charles Babbage’s thought.
- 1924 — Walter A. Shewhart at Bell Labs invents the control chart. That same year or the next, W. Edwards Deming learns of this new method and takes it with him to the United States Department of Agriculture and the United States Census Bureau; he eventually became a statistical consultant to the Supreme Commander for the Allied Powers in occupied Japan.
- 1931 — Wassily Leontief moves from the Institute for the World Economy to the US National Bureau of Economic Research and during WWII served as a consultant to the US Office of Strategic Services. He developed Input-Output analysis, for which he earned the Nobel Prize in Economics.
- 1937 — Operations Research became an embryonic field as Great Britain geared up for WWII.
- 1939 — Leonid Kantorovich introduces the concepts behind linear programming. Kantorovich, L.V. (1939). “Mathematical Methods of Organizing and Planning Production”. Management Science. 6 (4): 366–422. doi:10.1287/mnsc.6.4.366. JSTOR 2627082.
- 1949 — Wassily Leontief uses the Harvard Mark II to model the US economy based on various sectors of industry.
Mathematics and Logic Basis

- 2700-2300 BCE — Sumerian abacus. They used a base-60 number system.
- 2000 BCE — Babylonians and Egyptians have theorems about the sides of triangles (trigonometry).
- 300 BCE — Babylonians use one of their punctuation symbols as a zero-like placeholder.
- 200-100 BCE — Brahmi numerals are invented, becoming the basis of Indian and Hindu-Arabic numerals; they do not include zero. Pingala uses a concept of void, śūnya, in his writings on prosody as a placeholder for zero, essentially creating a big-endian binary system. Zero as a placeholder isn’t repeated until the Lokavibhāga in 458 CE. In this same period, Hipparchus is using a placeholder for zero and making tables of chords (the word actually means bowstring in Greek); these are the precursors to trigonometric functions.
- 40 BCE — Andronicus of Rhodes compiles Aristotle’s works on logic into six volumes of the Organon. Interestingly, Aristotle would call logic “analytics”.
- 36 BCE — Maya-Lenca associated civilizations already have zero as part of the long count calendar. They used a base 20 number system, and a few remaining cultures still use it. The concept spreads through the adjacent regions, but as far as we know it never spreads beyond the so-called “Americas”. The Tawantinsuyu (Inca) had an encoding for zero in their khipu by there being a missing knot in the corresponding position. It would be amazing to see what kinds of computation could have evolved out of this system, but it was cut short by colonization. Perhaps one day Indigenous people will continue developing our indigenous systems. They also had various mechanisms for calculation, see: Maya number system, Yupana, and Nepōhualtzintzin.
- 200 — Diophantus of Alexandria.
- 350-505 — Surya Siddhanta using similar methods to Hipparchus, the math of the heavens (jyotisha) are laid out in terms of the jya (bow-string).
- 499 — Aryabhatiya introduces sine, cos, and inverse sine (jya, koti-jya, utkrama-jya) as half angle half chord versions of those found in Surya Siddhanta.
- ~800 — Virasena calculates base 2, 3, and 4 logarithms.
- 820 — Al-Khwārizmī writes Al-Kitāb al-mukhtaṣar fī ḥisāb al-jabr wal-muqābala, or The Compendious Book on Calculation by Completion and Balancing.
- 1126 — Adelard of Bath translates Al-Khwārizmī’s works on Indian numbers as Dixit Algorizmi.
- 1145 — Robert of Chester translates Al-Khwārizmī’s book of “algebra” into Latin as Liber algebrae et almucabala.
- 1150 — Bhāskara II writes the Siddhānta Śiromani, which lays out methods of trigonometry and calculus, predating Newton and Leibniz.
- 1202 — Leonardo Fibonacci publishes Liber Abaci. It includes a well-known sequence of numbers from the Arabic world which became known as the Fibonacci sequence. The sequence was known as early as 200 BCE in India, in the works of Pingala.
- 1530 — The Yuktibhāṣā, a largely ignored treatise on math and astronomy in the Malayalam language, lays out the methods of calculus, predating Newton and Leibniz.
- 1614 — Napier lays out his idea for logarithms in Mirifici logarithmorum canonis descriptio.
- 1637 — Both René Descartes + Pierre de Fermat develop methods of analytic geometry, extending the graphical methods laid out by Omar Khayyám, the 11th-century Persian mathematician, and likely Menaechmus, the Greek mathematician (380-320 BCE).
- 1654 — Blaise Pascal + Pierre de Fermat found the mathematical theory of probabilities.
- 1735 — Leonhard Euler lays the foundations of graph theory.
- 1799 — Carl Friedrich Gauss proves the fundamental theorem of algebra.
- 1843 — John Stuart Mill publishes A System of Logic, Ratiocinative and Inductive.
- 1847 — George Boole introduces an immature version of his system of logic.
- 1854 — George Boole publishes An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities.
- 1874 — Georg Cantor starts laying the foundation of Set Theory.
- 1889 — Giuseppe Peano publishes “Arithmetices principia: nova methodo exposita”, which lays out the Peano Axioms.
- 1891 — Cantor lays out his diagonal argument.
- 1928 — David Hilbert reformulates his earlier problem of proving the consistency of the Peano Axioms into three parts: 1) Is mathematics complete? 2) Is mathematics consistent? 3) Is mathematics decidable? (The Entscheidungsproblem)
- 1930 — Kurt Gödel announces a proof relating to the completeness question.
- 1931 — Kurt Gödel publishes “On Formally Undecidable Propositions of Principia Mathematica and Related Systems I”
- 1933 — Kurt Gödel + Jacques Herbrand create the general recursive functions.
- 1935 — Alonzo Church publishes “An Unsolvable Problem of Elementary Number Theory.”
- 1936 — Alonzo Church publishes “A Note on the Entscheidungsproblem.” Emil Post publishes “Finite Combinatory Processes. Formulation I.”
- 1937 — Alan Turing publishes “On Computable Numbers With an Application to the Entscheidungsproblem”. Claude Shannon publishes his masters thesis, “A Symbolic Analysis of Relay and Switching Circuits”
- 1939 — Alan Turing publishes his PhD thesis, “Systems of Logic Based on Ordinals.”
- 1952 — Grace Hopper creates an operational link-loader (called a compiler back then).
- 1955-1959 — Grace Hopper + her team developed the FLOW-MATIC language.
- 1957 — First Fortran compiler by John Backus.
- 1958 — COBOL first developed by the Committee on Data Systems Languages (CODASYL). Delegates included Mary K. Hawes, Grace Hopper, Jean Sammet, and Saul Gorn.
- Also see Wikipedia’s Timeline of Mathematics.
- Also see Wikipedia’s Timeline of Programming Languages.
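Several of the number systems in the timeline above (Sumerian base 60, Maya base 20, Pingala’s binary) differ only in the base of their positional notation, and converting a number into any base is a small exercise:

```python
def to_base(n, base):
    """Return the digits of n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % base)  # least significant digit first
        n //= base
    return digits[::-1]

print(to_base(3661, 60))  # [1, 1, 1]: one hour, one minute, one second
print(to_base(20, 20))    # [1, 0]: zero acting as a positional placeholder
print(to_base(5, 2))      # [1, 0, 1]: binary, as in Pingala's prosody
```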
Communications Basis

- 1826-1830 — Joseph Henry does the legwork which leads to the electromechanical relay and the telegraph.
- 1831-1833 — Carl Friedrich Gauss + Wilhelm Eduard Weber work on electromagnetic theory and build a telegraph in 1833. The units of magnetic flux density and magnetic flux are named after them.
- 1832 — Pavel Schilling demonstrates his needle telegraph. That same year Charles Wheatstone lectures on this same idea telling people that it was already technically possible to build a telegraph. Samuel Morse meets Charles Thomas Jackson and the two discuss Jackson’s electromagnet.
- 1836 — William Cooke takes interest in the telegraph in Europe.
- 1837 — Cooke + Wheatstone demonstrate a telegraph over a distance of 1.5 mi.
- 1838 — Morse, Gale (a personal friend of Joseph Henry), + Alfred Vail develop Morse code and build a telegraph using electromechanical relays to increase the distance served to 10 mi. There was a public demonstration at Speedwell Ironworks in New Jersey and they transmitted the message “A patient waiter is no loser”.
- 1840 — Wheatstone patents his “Wheatstone ABC Instrument”.
- 1841 — Alexander Bain creates the first teleprinter, followed by Royal Earl House in 1846 + David Edward Hughes in 1855.
- 1844 — Morse demonstrates with the message “What hath God wrought.” He sent it 38 miles, from the Supreme Court Chamber in the US Capitol basement to Mount Clare Station in Baltimore, Maryland.
- 1845 — Wheatstone + Cooke register the Electric Telegraph Company.
- 1863 — Edward A. Calahan invents the stock ticker.
- 1864-1866 — Wheatstone advises the Atlantic Telegraph Company’s Atlantic Cables.
- 1870 — Telegraph systems in the UK are placed under government control. Émile Baudot invents his 5-bit code. The baud, the unit of symbol transmission, is named in his honor.
- 1874 — Western Union’s President William Orton proclaims the telegraph “the nervous system of commerce”. Alexander Graham Bell meets with Joseph Henry to discuss his idea. Henry encourages Bell and tells him to get the knowledge he needs. Later, Bell meets Thomas A. Watson and the two begin to work together.
- 1876 — “Mr. Watson come here — I want to see you.” Later that year they demonstrated a two-way call over 2.5 mi between Cambridge and Boston.
- 1877 — Bell Telephone Company.
- 1888 — Almon Brown Strowger develops the first commercially successful stepping switch, or uniselector.
- 1891 — Strowger patents the rotary dial. The next year, he sets up the first automatic telephone exchange.
- 1915 — Bell makes the first transcontinental phone call, spanning 3,400 miles between NYC and San Francisco.
- 1925 — Bell Telephone Labs, Inc. forms from Western Electric + AT&T.
- 1926 — Telex developed in Germany. The German Post Office starts telex service in 1933.
- 1928 — Ralph Hartley publishes “Transmission of Information.”
- 1948 — Claude Shannon publishes “A Mathematical Theory of Communication”
- 1949 — Claude Shannon publishes “Communication Theory of Secrecy Systems.”
- 1951 — Claude Shannon published “Prediction and Entropy of Printed English.”
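Shannon’s 1948 paper gives a precise measure of information: the entropy of a source, H = sum over symbols of p * log2(1/p), in bits per symbol. It is short enough to compute directly:

```python
import math
from collections import Counter

def entropy_bits(message):
    """Shannon entropy in bits per symbol: H = sum(p * log2(1/p))."""
    counts = Counter(message)
    total = len(message)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

print(entropy_bits("ab"))    # 1.0: a fair coin's worth of surprise per symbol
print(entropy_bits("abcd"))  # 2.0: four equally likely symbols need two bits each
print(entropy_bits("aaaa"))  # 0.0: a message that never varies carries no information
```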
Physical Basis

- 200 BCE — Likely time period of the Antikythera mechanism and other similar analog computation devices.
- 850 — The Banū Mūsā brothers are commissioned by the Caliph of Baghdad to write a book on mechanical devices called Kitab al-Hiyal, or Book of Ingenious Devices.
- 1206 — Ismail al-Jazari writes Kitab fi ma’rifat al-hiyal al-handasiya, or The Book of Knowledge of Ingenious Mechanical Devices.
- 1330 — Richard of Wallingford builds the Saint Albans clock. Then more clocks start to show up. This is merely 128 years after Fibonacci reintroduced mathematics from the Arab world. How odd would it be if those mathematical principles were the only ideas which had diffused from the Arab world into the Latin- and English-speaking world?
- 1642 — Blaise Pascal creates a mechanical calculator called the Pascaline.
- 1672-1694 — Inspired by Pascal, Gottfried Wilhelm Leibniz constructs a stepped reckoner, but the gearwork couldn’t be fabricated properly.
- 1725 — Basile Bouchon partially automates part of weaving by using perforated paper tape.
- 1804 — Joseph Marie Jacquard fully automated weaving with punched cards.
- 1820 — Thomas de Colmar patents the Arithmomètre.
- 1822 — Charles Babbage begins work on the difference engine.
- 1827 — Georg Ohm develops his law relating current, voltage, and resistance.
- 1833 — Ada Lovelace meets Charles Babbage.
- 1843 — Ada Lovelace translates Luigi Federico Menabrea’s description of Charles Babbage’s presentation, adding extensive annotations and an appendix with what is arguably the first computer program: an algorithm for calculating the Bernoulli numbers. The program was essentially written in machine language. As for the engine itself, it was never fully built. Only its predecessor, the difference engine, was fully built, in 1990.
- 1854 — Gustav Kirchhoff generalizes Ohm’s law.
- 1872 — Sir William Thomson makes a tide-predicting machine using pulleys and wires.
- 1884 — Herman Hollerith uses punched cards to assist tabulation. His company merged with others and became IBM in 1924.
- 1927 — Working differential analyzer made by Vannevar Bush.
- 1934 — Tommy Flowers creates a workable test of a vacuum tube computer.
- 1937 — Alan Turing builds an electromechanical-relay-based digital “computer” while working on his thesis at Princeton (the relay is borrowed from the communications industry).
- 1944 — Harvard Mark I.
- 1945-1947 — Alan Turing works on the Automatic Computing Engine.
- 1945 — ENIAC
- 1947 — The first point contact transistor is demonstrated by John Bardeen, Walter Brattain, and William Shockley at Bell Labs.
- 1959 — The first MOSFET is created by Mohamed Atalla + Dawon Kahng.
- 1960 — The first integrated circuits are created. In the following years, the Apollo program would be one of the largest consumers of ICs.
- 1965 — Gordon Moore observes that the transistor density of ICs is doubling every two years (Moore’s Law).
- 1971 — Intel produces the 4-bit 4004 microprocessor.
The Evolution of Programming
In the beginning, programming a machine was a task full of tedium. One had to:
- Be intimately familiar with the hardware being used.
- Analyze the scope of the computation.
- Divide the computation into stages which can be performed on the hardware.
- Plan out the logic and arithmetic of each task and how memory would be utilized.
- Translate that logic, arithmetic, and storage instructions into machine code.
- Input that code into the machine using an interface specific to each machine (wires, switches, punch cards, etc.).
- Read out and record the output of the computation.
Later, assemblers were invented that let people use architecture-specific mnemonics, called assembly language, instead of binary, octal, or hexadecimal. In time, people like Grace Hopper came along and — in addition to removing physical bugs from the hardware — created higher-level languages. These were unique in several ways. One, they didn’t require complete knowledge of the hardware. Two, while they still required some memory management, they could plan out memory address offsets and register usage automatically. Three, you could write a program on one machine architecture, and if the language became available on another architecture, it would likely still compile.
The next big evolution in programming was dynamic memory management. A programmer no longer had to specifically plan out memory usage (in theory); instead, the language itself would plan out how and when to allocate and release memory. Typically there was a reference counter for each value, and when the reference count reached zero, the memory would be released. This evolved into more advanced forms of garbage collection. Unfortunately, there were side effects. People could write code which caused a reference to stick around beyond its actual use, leading to memory leaks. Also, in certain cases the garbage collector would cause some pretty awful hiccups when it decided it was time to clean up during a critical moment. There are workarounds, but they typically mean hacking the language into letting one plan out memory allocation manually.
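The reference-counting scheme described above is simple enough to sketch by hand. Real runtimes adjust these counts implicitly on every assignment; the class here is a toy that makes the mechanism visible:

```python
class RefCounted:
    """A toy reference-counted resource; runtimes do this implicitly."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.freed = False

    def incref(self):
        self.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True  # the runtime would release the memory here

buf = RefCounted("buffer")
buf.incref()      # variable A starts pointing at it
buf.incref()      # variable B starts pointing at it
buf.decref()      # A goes out of scope
print(buf.freed)  # False: B still holds a reference
buf.decref()      # B goes out of scope, the count hits zero
print(buf.freed)  # True
```

The memory leak mentioned above is simply a reference that never receives its final decref, so the count never reaches zero and the memory is never released.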
There are a lot of patterns which have emerged over the decades, and many books have been written about them. Some of the patterns have libraries or frameworks which support them. Many do not, which means programmers must build their own libraries or frameworks for them.
By default, many programming languages support a text-based environment because of the historical backwards compatibility of only having a text-based interface. When programs entered the world of visual interfaces, all of the graphics had to be drawn by the program; there was no library for visual interfaces. A program’s graphics engine wrote directly to the video buffer (or to a virtual buffer, which was synchronously copied to the video buffer at the refresh rate to minimize visual artifacts).

As graphical shells and operating systems like Windows, OS/2, and Mac OS became popular, programs would use a kind of object-oriented approach to specify every aspect of the user interface in code. Every visual element on the screen would have multiple objects associated with it. Many properties had to be filled in programmatically, in an imperative style. Messages would be dispatched to worker threads through a central dispatcher (a WindowProc in Windows). If a program needed a layout to resize as the window changed, all of the code for that behavior had to be written by hand. Code with many different concerns was all grouped together around the associated objects.

This started changing when Sun Microsystems released Java with its Abstract Windowing Toolkit, and became easier with Swing. Along with those layout engines, the standards brewing around HTML + CSS started to make declarative UIs very appealing. Many of those declarative approaches allow separating concerns around presentation, content, and certain animations. Today, many application UI frameworks include an option for declaratively defining a UI layout because it has become a best practice, even if it doesn’t go far enough as an organizing principle.
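The imperative-to-declarative shift can be shown in miniature. The widget model below is invented for illustration, not the API of any real toolkit:

```python
# Imperative style: construct a widget and set every property step by
# step, as early GUI toolkits required. (Toy widget class, not a real API.)
class Button:
    def __init__(self):
        self.label = ""
        self.width = 0

btn = Button()
btn.label = "OK"
btn.width = 80

# Declarative style: the layout is plain data, and one generic
# renderer walks the description and builds the widgets.
layout = {"type": "button", "label": "OK", "width": 80}

def build(spec):
    widget = Button()
    widget.label = spec["label"]
    widget.width = spec["width"]
    return widget

built = build(layout)
print(built.label, built.width)  # OK 80, identical to the imperative version
```

The declarative version separates what the UI contains from how it gets constructed, which is the separation of concerns the HTML + CSS standards made so appealing.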
Programming is Stuck
Fundamentally, nothing about programming interfaces has changed since 1997. Data exchange (or data binding, if you prefer) remains an issue; though it seems like a few patterns are being settled on, they are really rehashes of similar concepts from the ’90s. Organizing code around UI and UX is still an issue. Making sure code is organized around various independent, orthogonal concerns remains a challenge. Most languages still place the burden on the programmer to manage where data is and how it is transferred around the system. Certain patterns have made many of these issues easier to solve. However, we keep reinventing the same wheels someone else invented in the ’70s through the ’90s. Sometimes we get actual progress, and it’s quite welcome.
We have an acronym to remind us not to do this: DRY, Don’t Repeat Yourself. It is the maxim most programmers live by. It is why we build standard libraries, garbage collectors, frameworks, reusable bits of code, isomorphic coding, and many other things. However, there are certain things that always seem to need to be repeated with a few changes each time we add a new screen or form to an application. Things do not wire themselves into finished products.