A History Of Robots In The Modern Era
Executive Summary
The modern era of robotics—from Jacquard’s programmable loom through today’s AI-powered autonomous systems—represents a continuous trajectory toward increasingly capable, increasingly autonomous, and increasingly intelligent machines.
What began as mechanical automation executing fixed programs has evolved into adaptive systems that perceive, reason, learn, and operate in unstructured, real-world environments alongside humans.
Several themes characterize this evolution:
From Programmed to Learned Behavior
Early robots executed fixed programs (Jacquard’s cards, Unimate’s taught sequences); modern robots learn from data, discovering behaviors through experience rather than explicit instruction. This shift—from programming to learning—enables robots to handle complexity and variation that exceed human ability to specify exhaustively.
From Structured to Unstructured Environments
Industrial robots required highly structured environments (parts consistently positioned, no unexpected obstacles); modern autonomous systems navigate messy real-world environments (self-driving cars in traffic, delivery robots on sidewalks, Mars rovers in unknown terrain), handling uncertainty and variation through robust perception and adaptive control.
From Isolation to Collaboration
Early robots worked in cages, isolated from humans for safety; modern collaborative robots (cobots) work alongside humans, sharing workspaces and tasks. This shift requires not just better safety systems but genuine human-robot interaction—robots that understand human actions, predict intentions, communicate clearly, and collaborate naturally.
From Teleoperation to Autonomy
Early space robots were teleoperated with significant human control; modern systems operate autonomously for extended periods, making local decisions without human intervention. This autonomy—driven partly by communication delays (can’t teleoperate Mars rovers with 20-minute delays) and partly by capability (autonomous systems can respond faster than remote humans)—represents robots as genuine agents rather than remote-controlled tools.
From Physical to Cyber-Physical
Modern robots aren’t just mechanical systems but cyber-physical systems integrating computation, sensing, actuation, and networking. AI systems process sensor data, cloud computing provides knowledge and learning, networks enable coordination among robots—making modern robots inseparable from broader information technology ecosystems.
Introduction
Today’s robots—from surgical assistants performing delicate operations to autonomous vehicles navigating city streets—stand as testament to over two centuries of innovation, each breakthrough building upon the discoveries of visionary engineers, inventors, and scientists who dared to imagine a world where machines could think, move, and work alongside humanity.
A History Of Robots In The Modern Era (1800 – Present Day)
This chronicle traces the story of visionary inventors, paradigm-shifting breakthroughs, unexpected failures that taught crucial lessons, and the gradual realization that creating truly intelligent machines requires not just better mechanisms, but fundamentally new approaches to sensing, reasoning, and action. The modern era transformed robots from programmed automata into autonomous agents, and this transformation continues accelerating toward futures our predecessors could have scarcely imagined.
1800s – Hisashige Tanaka: Japan’s Edison and the Karakuri Legacy
Hisashige Tanaka (1799-1881), often called “Japan’s Edison” or “Karakuri Giemon,” represents the culmination of Japan’s indigenous automata tradition (karakuri ningyō) before the nation’s forced opening to Western technology during the Meiji Restoration. Working throughout the 19th century in relative isolation from European clockwork traditions, Tanaka created extraordinarily sophisticated mechanical automata that demonstrated principles of programmability, feedback control, and autonomous operation using distinctly Japanese engineering approaches.
Tanaka’s most celebrated creations included:
Tea-Serving Automata (Chahakobi ningyō)
These dolls, perfected by Tanaka though building on earlier designs, could carry a cup of tea across a floor, stop when the cup was lifted by a guest, wait during tea consumption, and return to the starting position when the empty cup was replaced—all without external control during operation. The mechanism achieved this through elegant mechanical feedback: the cup’s weight controlled internal mechanism states through a system of levers and latches. When the cup was full, its weight held a latch in one position, allowing the walking mechanism to operate in “forward” mode. When the cup was lifted, the weight change released the latch, stopping all motion. When the empty cup was replaced (lighter than the full cup), the latch engaged differently, activating “return” mode. This represented genuine feedback control—the system sensed its own state (cup present/absent, full/empty) and modified behavior accordingly, without pre-programmed timing.
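The latch logic lends itself to a state-machine description. The sketch below is a modern, ahistorical rendering of that feedback loop in Python; the state names and events are illustrative inventions, not Tanaka’s terms (the original encoded its “states” in the positions of a weight-driven latch):

```python
# Illustrative state machine for the tea-serving doll's cup-weight feedback.
# The real mechanism held these "states" in a latch, not in software.

def next_mode(mode, cup_present, cup_full):
    if mode == "forward" and not cup_present:
        return "waiting"   # guest lifted the full cup: latch releases, motion stops
    if mode == "waiting" and cup_present and not cup_full:
        return "return"    # lighter empty cup replaced: latch engages return mode
    return mode            # otherwise, keep the current behavior

mode = "forward"
for cup_present, cup_full in [(True, True), (False, False), (True, False)]:
    mode = next_mode(mode, cup_present, cup_full)
print(mode)  # -> "return"
```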
Arrow-Firing Automata
Tanaka created mechanical archers that could select arrows from a quiver, nock them on a bowstring, draw the bow with realistic motion mimicking human archers, aim at a target, and release—then repeat the sequence. The mechanism required coordinating multiple independent actions in proper sequence with correct timing, essentially a programmed state machine implemented through cams and linkages. The archer had to maintain sufficient tension in drawing mechanisms to actually propel arrows toward targets, requiring robust construction and precise energy management.
Kanji-Writing Automata
Perhaps most remarkably, Tanaka built dolls that could write Japanese kanji characters—complex ideographs requiring dozens of coordinated brush strokes with proper stroke order, pressure variation, and flourishes. Writing kanji mechanically is extraordinarily difficult because characters aren’t constructed from simple geometric shapes but require flowing, organic strokes with culturally specific aesthetic qualities. Tanaka’s mechanism somehow encoded these qualities through cam arrangements that controlled brush position (X-Y coordinates), pressure (Z-axis), and possibly brush angle, creating calligraphy that Japanese observers found not merely legible but aesthetically satisfying. The programming capacity required to encode multiple complex characters represents mechanical data storage approaching the sophistication of contemporary European music boxes and automata.
Perpetual Clock (Myriad Year Clock)
Tanaka’s masterwork, completed in 1851, was an astronomical clock showing time, day, month, lunar phases, and various Japanese calendar cycles. Its name advertised operation across a “myriad” (10,000) years; in practice it could run for long stretches without adjustment, using compensating mechanisms that corrected for various error sources. The clock demonstrated Tanaka’s understanding of precision engineering, astronomical calculation, and long-term reliability—principles he would later apply when transitioning from automata to industrial machinery.
Tanaka’s significance extends beyond individual creations. After the Meiji Restoration (1868), when Japan rapidly industrialized to compete with Western powers, Tanaka transitioned from artisanal automata-making to industrial manufacturing, founding what would eventually become Toshiba Corporation. His trajectory—from traditional craftsman creating entertainment automata to industrial entrepreneur building telegraphs, electrical equipment, and weapons—mirrors robotics’ broader evolution from curiosity to critical technology. Tanaka demonstrated that the skills, principles, and mindset developed through automata creation—precision mechanics, systematic problem-solving, understanding of feedback and control—transferred directly to industrial automation, establishing a pattern that would repeat throughout robotics history.
1804 – Joseph Marie Jacquard’s Loom: The First Software
Joseph Marie Jacquard (1752-1834), a silk weaver from Lyon, France, created in 1804 a device that would revolutionize not just textile manufacturing but the entire concept of programmable machines: the Jacquard loom, which used sequences of punched cards to control weaving patterns automatically. While building on earlier work by Basile Bouchon, Jean Falcon, and Jacques Vaucanson, Jacquard’s implementation achieved practical success that transformed the textile industry and established the punched card as the dominant information storage medium for the next 150 years.
The Jacquard loom’s revolutionary principle was the complete separation of pattern (information) from mechanism (processor). Traditional looms required weavers to manually select which warp threads to raise for each pass of the shuttle—a skilled, tedious process where pattern complexity directly translated to labor time. Jacquard’s system automated this selection: punched cards moved through the loom sequentially, with each card representing one row of the woven pattern. Holes punched in specific card positions allowed mechanical pins to pass through, engaging hooks that lifted corresponding warp threads; positions without holes blocked pins, leaving those threads down. As the shuttle passed between raised and lowered threads, it wove one row according to the card’s encoded pattern. The next card would present a different pattern of holes, creating the next row. Complex patterns requiring thousands of sequential decisions could thus be woven automatically by preparing an appropriate card sequence.
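A toy simulation makes the card-reading principle concrete. The sketch below is an illustration, not a model of any historical loom: each card is a row of bits selecting which of eight hypothetical warp threads to raise, and the same “loom” executes whatever card deck it is fed.

```python
# Each card row is a hole pattern: 1 = hole = pin passes = thread raised.
# The "loom" is general-purpose; only the card deck (the program) changes.

def weave(cards, n_threads=8):
    for row, card in enumerate(cards, start=1):
        raised = [t for t in range(n_threads)
                  if (card >> (n_threads - 1 - t)) & 1]
        print(f"row {row}: raise threads {raised}")

pattern_a = [0b10101010, 0b01010101]   # a simple checkerboard weave
pattern_b = [0b11110000, 0b00001111]   # a different design, same hardware
weave(pattern_a)
weave(pattern_b)
```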
The implications were profound and extended far beyond textiles:
Programmability
The same loom mechanism could weave infinite patterns by loading different card sequences—the hardware was general-purpose, with behavior determined by software (the cards). This separation is the fundamental architecture of all modern computing: one processor (CPU) executing different programs stored in memory. Jacquard had invented software 140 years before the electronic computer.
Information Storage
Punched cards physically embodied information in a format machines could read. The pattern of holes was the program, encoded in a machine-readable medium. This proved so effective that punched cards dominated data processing through the 1970s—Herman Hollerith used them for the 1890 U.S. Census, IBM built its early computing empire on card-processing equipment, and early electronic computers such as ENIAC used punched cards for data input and output (and, soon after, for program entry)—a medium descended directly from Jacquard’s invention.
Digital Representation
Each card position was binary—hole or no-hole, true or false, 1 or 0. Jacquard had created a digital information system, representing complex analog patterns (visual designs) through discrete digital encoding. This principle—that continuous phenomena can be represented through discrete samples—underlies all digital technology.
Social Disruption
The Jacquard loom triggered violent backlash. Skilled weavers, recognizing that automation threatened their livelihoods, rioted in Lyon and elsewhere, destroying Jacquard looms and occasionally attacking Jacquard himself. This was history’s first large-scale automation-driven labor displacement, presaging debates about technological unemployment that intensified through the Industrial Revolution and continue unabated today with AI and robotics. The Luddite movement in England (1811-1816), where textile workers destroyed mechanized looms, emerged from similar fears and realities—automation eliminated skilled jobs, concentrated wealth in factory owners’ hands, and transformed craftspeople into machine-tenders. These social dimensions of automation have never been resolved; they merely shifted as technology advanced.
Charles Babbage explicitly cited the Jacquard loom as inspiration for his Analytical Engine, recognizing that if weaving patterns could be stored on cards and executed mechanically, so too could mathematical operations. Ada Lovelace, in her notes on the Analytical Engine, famously wrote: “The Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves.” This connection between textile automation and computing would prove prophetic—modern programming still uses textile metaphors (threads, weaving concurrency) descended from Jacquard’s innovation.
The Jacquard loom operated successfully for over a century in industrial textile production, with some upgraded versions still in use today for specialty weaving. It represents the first practical programmable machine, the ancestor of all robots that follow stored instructions, and a vivid demonstration that the principles underlying modern computing—programmability, information storage, digital encoding, hardware-software separation—were discovered not through abstract mathematical investigation but through solving concrete industrial problems. Jacquard showed that automation and programming were inseparable, a unity that defines robotics to this day.
1822-1837 – Charles Babbage’s Engines: Mechanical Computation as Automation
Charles Babbage (1791-1871), English polymath, mathematician, and inventor, designed two revolutionary calculating engines that, though never fully completed in his lifetime, established the conceptual foundations for automatic computation and profoundly influenced robotics by demonstrating that complex intellectual tasks—calculation, decision-making, even what we now call programming—could be mechanized.
1822 – Difference Engine No. 1
Babbage’s first design, begun in 1822 with government funding, aimed to automatically calculate and print mathematical tables (logarithms, trigonometric functions, navigational tables) whose manual calculation was error-prone, tedious, and critically important for navigation, engineering, and science. The Difference Engine used the “method of finite differences,” a mathematical technique that reduces polynomial calculations to simple addition—perfect for mechanical implementation since addition is mechanically simpler than multiplication or division.
The Engine consisted of vertical shafts bearing digit wheels (each representing one decimal digit 0-9), arranged in columns representing decimal places. Turning a crank advanced the mechanism through one calculation cycle: the machine would add the values in one column to another, propagate carries between columns when sums exceeded nine, and settle to the new result—all through pure mechanical action of gears, levers, and ratchets. Repeating this cycle produced successive values of the polynomial function being calculated, which could be automatically printed, eliminating human transcription errors.
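To see why repeated addition suffices, consider f(x) = x² + x + 41, the polynomial Babbage reportedly used to demonstrate his model. The sketch below tabulates it from its constant second difference, in modern Python rather than Babbage’s mechanisms:

```python
# Method of finite differences: tabulate a polynomial using only addition.
# Columns hold [f(x), first difference, second difference, ...]; for an
# n-th degree polynomial the n-th difference is constant.

def difference_engine(columns, steps):
    cols = list(columns)
    table = [cols[0]]
    for _ in range(steps):
        # One crank cycle: add each difference column into the one above it.
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]
        table.append(cols[0])
    return table

# f(x) = x^2 + x + 41: f(0) = 41, first difference f(1)-f(0) = 2, second difference = 2.
print(difference_engine([41, 2, 2], 9))
# -> [41, 43, 47, 53, 61, 71, 83, 97, 113, 131]
```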
Babbage’s partial demonstration model (about 1/7 of the full design) worked perfectly, calculating values reliably. However, the full Engine required unprecedented mechanical precision—thousands of components machined to tolerances that challenged contemporary manufacturing capabilities. Cost overruns, personality conflicts with Joseph Clement, the engineer building the components, and Babbage’s tendency to redesign components repeatedly (seeking perfection) led the government to abandon the project in 1842 before completion. A working Difference Engine, built to Babbage’s improved Difference Engine No. 2 design, was finally constructed by London’s Science Museum in 1991 using Babbage’s drawings and period-appropriate materials and techniques—it worked flawlessly, vindicating Babbage’s design and demonstrating that the technology existed in his era to build his machine; what was lacking was project management and sustained funding.
The Difference Engine demonstrated automatic calculation—the machine, once set up with initial values and cranked, performed calculations without human intervention or decision-making during operation. It was a specialized calculator, not a programmable computer, but it showed that intellectual labor (calculation) could be mechanized just as physical labor had been mechanized in factories. This conceptual leap—that thinking could be automated—proved foundational to both computing and robotics.
1837 – Analytical Engine
Moving beyond the Difference Engine’s specialized calculation, Babbage conceived around 1837 a machine of staggering ambition: a general-purpose programmable computer using mechanical means. The Analytical Engine was never built in Babbage’s lifetime (he spent decades refining designs but never secured funding for construction), yet its architecture was so advanced that it anticipated virtually every principle of modern computing.
The Engine featured:
Separate Memory and Processor
Babbage called memory the “Store” (capable of holding 1,000 numbers of 40 decimal digits each) and the processor the “Mill” (performing arithmetic operations). This separation, standard in all modern computers, was revolutionary—earlier calculators had no distinction between numbers being processed and mechanisms processing them.
Punched Card Programming
Inspired by Jacquard’s loom, Babbage designed the Analytical Engine to read instructions from punched cards. Operation cards specified which arithmetic operation to perform; variable cards specified which memory locations to operate on; number cards loaded constants. This constituted machine-level programming—explicit instructions to the computer about which operations to execute in which sequence.
Conditional Branching
The Engine could make decisions based on calculation results—if a number was negative, branch to one sequence of cards; if positive, branch to another. This if-then logic is essential for general-purpose computing, allowing programs to respond to conditions rather than blindly following fixed sequences.
Looping
The Engine could repeat instruction sequences specified numbers of times or until conditions were met, enabling iteration—the heart of algorithmic efficiency. Rather than writing 10,000 operation cards to perform an operation 10,000 times, one could write the operation once and instruct the Engine to loop 10,000 times.
Integrated Memory
Unlike difference engines that calculated one function at a time, the Analytical Engine could store intermediate results, recall them later, use one calculation’s output as another’s input—essentially performing subroutines and building complex calculations from simpler components.
Ada Lovelace (1815-1852), daughter of Lord Byron and accomplished mathematician, wrote extensive notes on the Analytical Engine (1843) that went far beyond describing its mechanics to exploring its implications. She recognized that the Engine could manipulate symbols generally, not just numbers—it could theoretically compose music, produce graphics, or process language if these could be symbolically encoded. She wrote what is considered history’s first computer program—an algorithm for calculating Bernoulli numbers on the Analytical Engine, complete with loop control and variable management. Lovelace understood that a sufficiently powerful general-purpose computer transcended calculation to become a symbol-manipulation machine capable of executing any formally described process—a vision that wouldn’t be rigorously formalized until Turing’s work a century later.
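Her Bernoulli computation translates naturally into modern code. The sketch below is a present-day re-expression of the calculation her 1843 Note G specified, using the standard recurrence—not a transcription of her actual card program:

```python
# Bernoulli numbers via the recurrence sum_{k=0}^{m} C(m+1, k) B_k = 0,
# solved for B_m at each step--the computation Lovelace programmed
# (on paper) for the Analytical Engine.
from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]                           # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))                  # loop + stored intermediate results
    return B

print(bernoulli(6))
# -> [Fraction(1, 1), Fraction(-1, 2), Fraction(1, 6), Fraction(0, 1),
#     Fraction(-1, 30), Fraction(0, 1), Fraction(1, 42)]
```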
The Analytical Engine was never built in Babbage’s lifetime due to cost, technical challenges, and lack of compelling applications (most contemporary calculation needs could be met by human computers or simpler specialized calculators). However, its influence on computing history cannot be overstated—when electronic computers emerged in the 1940s, they independently rediscovered Babbage’s architectural principles. When engineers like John von Neumann articulated stored-program computer architecture, they essentially formalized what Babbage had designed mechanically a century earlier.
For robotics, Babbage’s significance lies in demonstrating that programmability wasn’t limited to simple sequence execution (like Jacquard looms weaving patterns or automata following cam programs) but could include conditional logic, loops, memory management, and general symbol manipulation. Modern robots are essentially mobile computers coupled to sensors and actuators; the computation enabling their autonomy descends conceptually from Babbage’s vision of machines that could reason through calculation.
1890-1898 – Nikola Tesla’s Remote Control: Wireless Command and the Birth of Telerobotics
Nikola Tesla (1856-1943), the Serbian-American inventor whose work on alternating current, motors, and electromagnetic phenomena shaped modern electrical engineering, made a crucial contribution to robotics that is often overlooked: he pioneered remote wireless control, demonstrating that machines could be commanded at a distance without physical connections—a principle underlying all teleoperated robots, drones, and remotely operated vehicles.
1890 – Early Remote Control Experiments
Tesla began experimenting with radio frequency (RF) transmission and reception in the early 1890s, independently discovering principles that Guglielmo Marconi would later commercialize as radio telegraphy. While Marconi focused on point-to-point communication, Tesla envisioned using radio waves to control machines remotely. His breakthrough was recognizing that if electrical signals could be transmitted wirelessly, those signals could actuate mechanisms at receiving locations—essentially transmitting not just information but command and control.
1898 – Madison Square Garden Demonstration
In September 1898, Tesla publicly demonstrated his “teleautomaton”—a radio-controlled boat he operated before an audience at Madison Square Garden in New York during the first annual Electrical Exhibition. The demonstration boat, approximately four feet long, could move forward and backward, turn left and right, and control its running lights—all in response to commands Tesla transmitted from a control box using radio frequency signals.
The technology involved several innovations:
Radio Frequency Tuning
Tesla used tuned circuits—transmitters and receivers resonating at specific frequencies—to create what we now call frequency channels. By using different frequencies for different commands (or encoding different commands as sequences of pulses at one frequency), he achieved selective control where specific signals triggered specific actions without crosstalk interference.
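The selectivity Tesla exploited is textbook circuit resonance: a receiver built around an inductance L and a capacitance C responds strongly only near its resonant frequency,

$$f_0 = \frac{1}{2\pi\sqrt{LC}}$$

so a transmitter and receiver tuned to the same f0 form a channel that off-frequency signals largely ignore. (The formula is standard physics; Tesla’s specific component values are not part of this account.)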
Secure Control
Tesla incorporated a form of primitive encryption or authentication—the control signals weren’t simple on/off commands but coded sequences. An unauthorized person couldn’t easily commandeer the boat because they wouldn’t know the correct signal patterns. This presaged modern concerns about hacking and unauthorized control of robotic and autonomous systems.
Coherer-Based Reception
The boat likely used coherer detectors (the dominant RF detection technology before vacuum tubes)—devices containing metal filings that changed electrical resistance when exposed to radio waves, allowing weak RF signals to control stronger electrical circuits and thus actuate motors and switches.
The audience initially disbelieved what they witnessed. Many assumed the boat contained a hidden operator or was somehow connected by invisible wires. Tesla invited skeptics to send him commands (“turn left,” “flash lights,” etc.) which he transmitted, causing the boat to obey—demonstrating genuine remote control. Some observers joked nervously about “trained seals,” unable to accept that human will could be wirelessly imposed on a machine.
Tesla’s demonstration had several profound implications:
Telerobotics
The teleautomaton was history’s first teleoperator—a machine acting as a physical extension of human will across distance. Every modern application of teleoperation (surgical robots, space exploration rovers, bomb disposal robots, undersea vehicles, drones) descends from Tesla’s principle that command signals can be wirelessly transmitted to remote machines.
Military Applications
Tesla immediately recognized military potential—unmanned vehicles that could deliver explosives, perform reconnaissance, or attack targets without risking human operators. He attempted to interest the U.S. Navy in torpedo boats and explosive-carrying automata, proposing that entire navies could be remotely operated from safe command centers. The military showed little interest at the time (existing torpedo technology seemed adequate), but within decades, remotely controlled systems became crucial military technologies, from early remotely piloted aircraft to today’s military drones.
Autonomy Spectrum
Tesla understood the distinction between teleoperation (where all commands come from a human operator) and autonomy (where the machine makes decisions independently). His teleautomaton was purely teleoperated, but Tesla speculated about future machines with sufficient onboard “borrowed mind” (his phrase for artificial intelligence) to act semi-autonomously, requiring only high-level direction rather than continuous detailed control. This concept of adjustable autonomy—where machines can function at various points on the spectrum from pure teleoperation to full autonomy depending on task demands and communication constraints—remains central to modern robotics.
Philosophical Implications
Tesla’s demonstration raised unsettling questions about agency and embodiment. If a machine responded perfectly to remote commands, was it merely a tool (like a hammer) or was it something more—a prosthetic extension of the human body, a projection of will into physical space? These questions intensified as remote control technology evolved: Do drone operators “kill” targets, or do drones kill while operators merely observe? Is telepresence robotics a form of being in two places simultaneously? These philosophical puzzles emerged first with Tesla’s teleautomaton.
Tesla patented his remote control technology (U.S. Patent 613,809, filed 1897, granted 1898) and spent subsequent years attempting to commercialize it, but practical applications proved limited until radio technology matured. His work was largely forgotten until decades later when military interest in remote-controlled vehicles revived during World War I and II. Today, as billions of people routinely operate remote-controlled devices (garage door openers, TV remotes, drones, and even considering autonomous vehicles as sophisticated descendants), Tesla’s pioneering demonstrations stand as the origin of the entire category of remotely operated and wirelessly controlled robotic systems.
1920-1921 – Karel Čapek’s R.U.R.: The Birth of “Robot”
The Czech playwright Karel Čapek (1890-1938) introduced the word “robot” to the world in his 1920 play R.U.R. (Rossum’s Universal Robots), which premiered on January 25, 1921, at the National Theater in Prague. This linguistic and conceptual contribution arguably influenced robotics more profoundly than any single technical invention, establishing in the popular imagination what robots were, why humanity would create them, and what dangers they posed—a narrative framework that still shapes public perception and ethical debates today.
The play’s premise involves a factory that manufactures artificial workers—not mechanical devices but synthetic biological beings created through chemistry rather than engineering (making them closer to what we now call androids or synthetic biology rather than electromechanical robots). These “robots” (from the Czech word robota, meaning “forced labor,” “drudgery,” or “serf labor,” with robotník meaning “serf”) are created to liberate humanity from work, performing all manual and even intellectual labor while humans enjoy leisure. However, the robots eventually develop consciousness, resent their enslavement, revolt, and exterminate humanity—except for one factory worker who showed them compassion and whom they spare to teach them how to reproduce, as they’ve forgotten the manufacturing formula. The play ends ambiguously, suggesting robots might inherit the Earth and evolve into new humanity, while also warning that cycles of oppression and revolution might repeat eternally.
Čapek’s play crystallized several themes that would dominate robotics discourse for the next century:
Robots as Labor
The fundamental premise that robots exist to work, freeing humans from toil, establishes the economic motivation for robotics. Every industrial robot replacing human workers, every service robot performing household tasks, every delivery drone or autonomous vehicle embodies Čapek’s vision. His play predicted automation-driven unemployment, wealth concentration, and social stratification—issues that intensified through the 20th century and remain acutely relevant as AI and robotics automate increasingly cognitive tasks.
The Rebellion Narrative
R.U.R. established the template of robot uprising against human creators—artificial beings gaining consciousness, recognizing their exploitation, and violently overthrowing their masters. This plot has been endlessly repeated (from Terminator to The Matrix to Westworld) and shapes genuine concerns among AI safety researchers about “AI alignment”—ensuring that increasingly capable artificial systems remain beneficial rather than becoming hostile. While modern robotics researchers emphasize that robots don’t spontaneously develop consciousness or goals, Čapek’s narrative captures legitimate concern: systems optimizing for goals without adequate constraints might pursue those goals in ways harmful to humans.
Humanity Through Contrast
By imagining humanity’s replacement, Čapek forced audiences to contemplate what makes humans distinct from machines. If robots can labor, think, and even love (in the play, some robots develop emotions and relationships), what remains uniquely human? Čapek suggested creativity, suffering, and mortality—robots created for efficiency lack the imperfection and limitation that drive human meaning-making. This theme resonates in ongoing debates about AI creativity, machine consciousness, and whether artificial systems can ever genuinely understand rather than merely process.
Unintended Consequences
The humans in R.U.R. create robots for benevolent purposes but fail to foresee how their creation will transform society and ultimately endanger humanity itself. This reflects genuine technological risk—complex systems produce emergent consequences that creators cannot predict. Čapek wrote before nuclear weapons, genetic engineering, or artificial intelligence, yet his play articulated the essential problem: humanity’s technological power increasingly exceeds our wisdom to deploy it safely.
Interestingly, Čapek attributed the word “robot” to his brother Josef Čapek, an artist and writer, who suggested it when Karel was struggling to name his artificial workers (Karel had initially called them “labori” from Latin labor). The word spread rapidly—by the late 1920s, “robot” had entered English, French, German, and other languages, becoming international vocabulary. This linguistic success reflects how perfectly Čapek had captured an emerging cultural concept: the 20th century would indeed be the age of increasingly autonomous, increasingly capable artificial workers.
Čapek himself was ambivalent about technology. He wasn’t a Luddite opposing automation but neither was he a techno-utopian believing technology would solve all problems. R.U.R. functions as philosophical inquiry: What do we lose if we achieve a world without work? If humans don’t labor, what gives life purpose? If we create new forms of intelligent life, what responsibility do we bear toward them? These questions remain urgently relevant as robotics and AI advance toward the scenario Čapek imagined—a world where artificial systems perform most economically valuable tasks while humans struggle to define their purpose.
The play’s influence on actual robotics development is complex. Many roboticists resent the rebellion narrative, feeling it creates unreasonable public fear of benign technologies and hinders progress. Yet Čapek’s influence is inescapable—every robotics ethics discussion, every AI safety conference, every depiction of robots in popular culture operates within frameworks Čapek established. By naming and narratively defining robots before they technically existed, Čapek shaped what they would become more than any engineer, ensuring that robotics would never be purely technical but always deeply entangled with questions of labor, autonomy, consciousness, and human meaning.
1927 – Televox
Engineer Roy J. Wensley, working for Westinghouse Electric Corporation, built Televox—described in contemporary sources as “the first robot put to useful work,” though this characterization is generous. Televox was a stationary device that could answer telephone calls and operate electrical switches in response to audio signals. When the phone rang, Televox would pick up the receiver, and the caller could control electrical switches by blowing whistles at specific pitches—different whistle tones activated different relays, turning equipment on or off.
Calling Televox a “robot” stretches the term—it was essentially a remote control system using audio tones as command signals, more telephone switchboard than autonomous machine. However, its significance lies in demonstrating that machines could interact with communication systems (telephones) and respond to coded commands, presaging later developments in networked robotics and remote operation. Televox also established Westinghouse as a pioneer in robotics showmanship, leading to later, more sophisticated humanoid demonstrators.
1928 – Eric
British engineer W.H. Richards constructed Eric, an aluminum humanoid robot exhibited at the Model Engineers Society annual exhibition in London and later at several venues including the Royal Horticultural Society. Eric could move its hands and arms, turn its head, and deliver short speeches—actually pre-recorded messages played through a loudspeaker concealed in its body, synchronized with jaw movements to create the illusion of speaking.
Eric represented early exploration of humanoid form factor—why build robots to look human? Richards and others believed anthropomorphic design would make machines less intimidating and more socially acceptable. Humans relate naturally to faces, understand body language, and have intuitions about how human-shaped entities move and interact. Building robots in human form leveraged these evolved perceptual and social capabilities, making human-robot interaction feel more natural. This logic, though often questioned (functional robot design should follow task requirements, not human resemblance), has persisted throughout robotics history, leading to continued investment in humanoid designs despite their technical challenges.
Eric also demonstrated early robotic “performance”—the robot was entertainment and technology demonstration simultaneously. It didn’t perform useful work but rather exhibited the possibility of mechanical humans, priming public imagination for robots that would eventually have genuine capabilities. This performative function shouldn’t be dismissed as mere showmanship; creating cultural space for new technologies, helping societies conceptually prepare for transformative innovations, is itself valuable work.
1937-1939 – Elektro
Westinghouse’s most ambitious humanoid demonstrator, Elektro, stood seven feet tall, weighed 265 pounds, and carried a skeleton of steel gears, cams, and motors beneath its aluminum skin. Exhibited at the 1939 New York World’s Fair (theme: “The World of Tomorrow”), Elektro could walk (actually rolling on wheels in its feet), move its arms, turn its head, speak (via 78-rpm record player), smoke cigarettes (using an air pump to draw smoke through its body and exhale it), and blow up balloons (more pneumatics). Elektro was accompanied by Sparko, a robot dog that could bark, sit, and beg.
Elektro’s “intelligence” was Wizard-of-Oz-style—an offstage operator controlled most functions, though Elektro did have some voice-activated responses (responding to specific words or phrases detected by a rudimentary speech recognition system using audio filters tuned to particular frequencies). The operator’s hidden control created the illusion of autonomy, making Elektro seem more capable than it was—a demonstration of technology’s theatrical dimension, where perceived capability can exceed actual capability if presentation is sufficiently compelling.
1940 – “Robbie”
Isaac Asimov (1920-1992), Russian-born American biochemist and science fiction writer, profoundly shaped robotics through fiction rather than engineering, establishing ethical frameworks and narrative conventions that influenced both public perception and researchers’ approaches to robot safety and control.
Asimov’s first robot story, published in the September 1940 issue of Super Science Stories magazine, featured a mute robot caretaker for a child. Unlike the menacing robots of earlier science fiction (descended from Čapek’s R.U.R. and Frankenstein’s monster), Asimov’s robot was gentle, protective, and incapable of harming humans due to fundamental design principles built into all robots by manufacturers. This marked a crucial shift: robots as helpers rather than threats, with safety ensured by design rather than human vigilance.
1942 – “Runaround” and the Three Laws
In the March 1942 issue of Astounding Science Fiction, Asimov published “Runaround,” a story where robot malfunction stemmed not from gaining consciousness or rebellion but from conflicting imperatives in its control system. To explain the malfunction, Asimov explicitly articulated the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(Asimov later added a Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm,” which takes precedence over the other three—essentially allowing robots to sacrifice individual humans for humanity’s greater good, with deeply troubling implications Asimov explored in later stories.)
1948 – William Grey Walter’s Tortoises: Autonomous Life Emerges
William Grey Walter (1910-1977), British neurophysiologist and robotics pioneer, created in 1948-1949 what were perhaps the first genuinely autonomous mobile robots—machines that exhibited goal-directed behavior, environmental response, and something resembling adaptive intelligence without external control or fixed programs. His robots, named Elmer and Elsie and dubbed Machina speculatrix for their restless, exploratory (“speculative”) behavior—famously including approaching mirrors and appearing to examine their own reflections—were simple by modern standards yet demonstrated principles that remain fundamental to autonomous robotics.
Each tortoise was a three-wheeled cart enclosed in a transparent plastic shell, containing:
Sensors
A photoelectric cell (light sensor) mounted on a rotating turret, allowing the tortoise to detect light sources and determine their direction; a bump sensor (contact switch) on the shell perimeter detecting obstacles.
Actuators
Two motors—one driving the rear wheels (forward motion), one rotating the turret (steering by directing the light sensor and mechanically coupling to the front steering wheel).
Control System
Analog electronic circuits (vacuum tubes and relays—transistors weren’t yet available) implementing what we now call behavior-based control: multiple simple behaviors (phototropism, obstacle avoidance, recharging) whose interactions produced complex overall behavior.
Power
Rechargeable batteries with automatic recharging capability—the tortoise could find its way to a charging station when power was low.
The tortoises’ emergent behaviors fascinated observers:
Phototropism (Light-Seeking)
Under normal light conditions, the tortoise would move toward moderate light sources, continually adjusting direction to keep the photocell aligned with the light. This produced moth-like circling around lamps, approach-and-retreat patterns, and navigation toward well-lit areas—all without programming specific paths, just following a simple rule: “move toward light.”
Negative Phototropism
When battery power dropped below a threshold, the control circuit reversed phototropic behavior—the tortoise began avoiding normal light sources but was strongly attracted to a specific dim light marking the recharging station. Upon reaching the station, the tortoise would connect to charging contacts, remain stationary while recharging, then resume normal light-seeking behavior when sufficiently charged. This represented a form of homeostatic behavior—the robot maintaining its energy state through environmental interaction, analogous to biological organisms feeding.
Obstacle Avoidance
When the bump sensor triggered (detecting collision), the tortoise would back up, turn randomly, then resume forward motion. This simple reactive behavior allowed navigation around obstacles without path planning or mapping—what modern robotics calls “reactive navigation.”
Social Behavior
When multiple tortoises inhabited the same space, they would interact in ways that seemed social—approaching each other (each attracted to the other’s pilot lamp), circling, occasionally getting stuck in feedback loops where two tortoises mutually pursued each other in tight circles. These interactions were purely emergent from individual robot behaviors, not programmed social protocols, yet they resembled animal social behaviors like courtship displays or territorial disputes.
Self-Recognition
When a tortoise approached a mirror, it would see its own light reflected and approach, detecting its own reflected light as another tortoise. This produced complex behaviors where the tortoise would alternate between approach and retreat, creating a dance-like pattern—a mechanical form of self-recognition, though of course without any genuine awareness that the reflection was itself.
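These behaviors compose into a control loop simple enough to sketch. The following is a software caricature of that loop (the originals used a couple of vacuum tubes and relays, not code), with all sensor scales, thresholds, and behavior names invented for illustration:

```python
# Behavior-based control in the spirit of Walter's tortoises: a fixed
# priority ordering over simple reactive behaviors--no map, no plan.

def tortoise_step(light_level, light_bearing_deg, bumped, battery):
    if bumped:
        return "back up, turn randomly, resume"        # obstacle avoidance
    if battery < 0.2:
        return "seek dim charger beacon"               # negative phototropism
    if light_level > 0.8:
        return "retreat from glare"                    # avoid over-bright light
    if light_level > 0.1:
        return f"turn {light_bearing_deg:+d} deg toward light"  # phototropism
    return "wander forward, scanning turret"           # explore

print(tortoise_step(0.5, -30, False, 0.9))  # -> "turn -30 deg toward light"
```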
Grey Walter’s significance for robotics extends beyond these specific behaviors:
Behavior-Based Paradigm
Walter demonstrated that complex, apparently intelligent behavior could emerge from simple rules implemented in minimalist hardware, without sophisticated computation, world models, or planning. This approach, largely forgotten during AI’s symbolic era (1960s-1980s), was rediscovered in the late 1980s by Rodney Brooks and became influential as behavior-based robotics. The principle—that intelligence emerges from interaction between simple control systems and complex environments—remains central to embodied AI and autonomous robotics.
Biological Inspiration
Walter explicitly designed his tortoises as investigations into neurophysiology—how could simple neural circuits produce purposeful behavior? He demonstrated that you didn’t need complex brains to get lifelike behavior, suggesting that much animal behavior might be explainable through relatively simple neural mechanisms. This bio-inspired approach anticipated modern biomimetic robotics and computational neuroscience, where researchers build robots to test theories about biological intelligence.
Autonomy Definition
The tortoises established a benchmark for what “autonomous” means in robotics—systems that pursue goals, maintain themselves (recharging), and adapt to environmental conditions without external control during operation. Modern autonomous robots (vacuum cleaners returning to chargers, solar-powered rovers managing power budgets, exploration robots navigating unknown terrain) are sophisticated descendants of Walter’s tortoises, implementing the same fundamental principle: autonomous systems must be able to sustain themselves through environmental interaction.
Public Demonstration
Walter exhibited his tortoises at the Festival of Britain (1951) and in various scientific venues, where they captivated audiences. Unlike Elektro and other humanoid demonstrators that were clearly artificial devices operated by hidden controllers, the tortoises exhibited uncanny lifelike behavior—they seemed to be deciding where to go, noticing obstacles, returning to recharge when tired—behaviors associated with living creatures, not machines. This visceral demonstration that machines could exhibit autonomous goal-directed behavior helped establish in public consciousness that robots could be genuinely intelligent, not merely following pre-programmed scripts.
Grey Walter’s tortoises represent a crucial conceptual transition in robotics—from automata executing fixed programs (like Jaquet-Droz’s writer) to autonomous agents responding to environmental contingencies (like biological organisms). They established that autonomy didn’t require sophisticated computation or human-like reasoning but could emerge from relatively simple mechanisms properly organized. This insight would prove foundational when, forty years later, behavior-based approaches revolutionized mobile robotics.
1954 – Devol’s Patent
George C. Devol Jr. (1912-2011) filed a patent in 1954 (granted 1961 as U.S. Patent 2,988,237) for “Programmed Article Transfer”—essentially the first industrial robot patent. Devol’s concept involved a programmable manipulator that could repeatedly perform sequences of motions to transfer objects—pick up a part from one location, move it through space following a programmed path, place it at another location, repeat indefinitely. The key innovation wasn’t mechanical (robotic arms had existed in various forms) but programmability: the same hardware could perform different tasks by loading different programs, making the manipulator general-purpose automation rather than single-task machinery.
Devol’s design used magnetic drum memory (the dominant data storage technology before solid-state memory) to store motion programs—essentially teaching the robot by manually moving it through desired motions while the control system recorded joint angles, then playing back the recorded sequence. This “teach-by-demonstration” programming paradigm remains standard in industrial robotics today.
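In modern terms the paradigm is a record-and-playback loop. The sketch below illustrates the idea only; the robot object and its read_joints/command_joints methods are hypothetical stand-ins, not Unimation’s actual interface:

```python
# Teach-by-demonstration, reduced to its essentials: sample joint angles
# while an operator moves the arm, then replay the recorded trajectory.
import time

def teach(robot, duration_s, hz=10):
    program = []                              # plays the role of the magnetic drum
    for _ in range(int(duration_s * hz)):
        program.append(robot.read_joints())   # record current joint angles
        time.sleep(1.0 / hz)
    return program

def playback(robot, program, hz=10):
    for joint_angles in program:
        robot.command_joints(joint_angles)    # drive joints to the recorded pose
        time.sleep(1.0 / hz)
```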
1956 – The Cocktail Party Meeting
At a 1956 cocktail party, Devol met Joseph Engelberger (1925-2015), a Columbia University engineer who had been inspired by Isaac Asimov’s robot stories (Engelberger reportedly claimed that Asimov’s fiction convinced him robots were humanity’s future). Engelberger recognized the potential in Devol’s patent—if programmable manipulators could be made reliable and cost-effective, they could revolutionize manufacturing by automating dangerous, repetitive tasks while providing flexibility that fixed automation lacked. They formed Unimation (Universal Automation), the first robotics company.
1961 – First Unimate Installation
After years of development, the first Unimate robot was installed at General Motors’ Inland Fisher Guide Plant in Ewing Township, New Jersey, in 1961. The robot’s task was extracting die-cast metal parts from a die-casting machine—unpleasant and dangerous work (temperatures exceeded 500°C, producing toxic fumes) that had high worker turnover and injury rates. The Unimate could work continuously in these conditions without complaint, fatigue, or injury risk.
The early Unimate was primitive by modern standards:
Hydraulic Actuation
The robot used hydraulic actuators (fluid-filled cylinders driven by pressurized oil) for motion, providing power sufficient for handling heavy parts but requiring complex plumbing, producing noise, and risking contaminating oil leaks. (Later industrial robots transitioned to electric motors, which are cleaner, quieter, and more precise, but early robots needed hydraulics’ power-to-weight advantages.)
Limited Sensing
The Unimate had almost no sensors—it was essentially blind and numb, executing memorized motion sequences without feedback about whether parts were actually present, correctly grasped, or properly placed. This “open-loop” control worked acceptably when parts were consistently positioned but failed when facing variation. Early robots required highly structured environments (parts always in exact positions, fixtures ensuring correct orientation) that minimized uncertainty.
Programming Challenges
Teaching the robot required manually moving its arm through desired motions—a skilled human operator would manipulate the heavy hydraulic arm while the control system recorded the trajectory. This was physically demanding, time-consuming, and imprecise. Reprogramming for new tasks required starting over, making early robots practical only for long production runs where setup time could be amortized.
Reliability Issues
Early robots broke down frequently—hydraulic seals leaked, control systems (using 1960s electronics) failed, programming memory degraded. Maintenance was specialized work requiring both mechanical skills (hydraulics, gears) and electrical expertise (control systems, sensors). Many early industrial robot installations failed because maintenance costs exceeded savings from automation.
Despite these limitations, Unimation succeeded in establishing industrial robotics as a viable industry. GM’s initial Unimate installation was successful enough that they ordered more robots, other automotive companies followed (seeing competitors automate, they feared falling behind), and by the early 1970s, Unimation had sold thousands of robots worldwide. Engelberger became an evangelist for robotics, founding the Robotics Industries Association (RIA), consulting for Japanese robotics companies (helping Japan become the world’s dominant industrial robotics power), and tirelessly promoting robots through media appearances, demonstrations, and conferences.
The Unimate established several principles that define industrial robotics:
Economic Justification
Robots succeed when they perform tasks that are dangerous, dirty, dull, or require extreme precision/repeatability—the “3D + P” criteria. Early robots rarely made economic sense for pleasant jobs humans did well, but they excelled at work humans avoided.
Flexibility
Unlike fixed automation (specialized machines that perform one task), programmable robots could be retasked, providing flexibility as products changed. This became crucial in automotive manufacturing where model changeovers required reconfiguring assembly lines—robots could be reprogrammed while fixed automation required expensive mechanical rebuilding.
Continuous Operation
Robots don’t take breaks, don’t call in sick, don’t form unions, and don’t require benefits—the economic case for robots was partly about continuous productivity. This created understandable worker anxiety about job displacement, tensions that persist as automation expands into more domains.
Quality Consistency
Robots perform the same motion with identical precision each cycle, producing consistent quality without the variation inherent in human manual labor. For manufacturing requiring tight tolerances (automotive assembly, electronics fabrication), robotic consistency enabled quality levels difficult for human workers to maintain.
1966 – Shakey: The First Intelligent Robot
While industrial robots automated factory work through brute repetition, research labs pursued a different vision: truly intelligent robots that could reason about their actions, plan paths, and operate in unstructured environments. The Stanford Research Institute (SRI) developed Shakey (1966-1972), the world’s first mobile robot to reason about its own actions through what we now call artificial intelligence.
Shakey was a wheeled mobile robot standing about 5.5 feet tall, equipped with:
Sensors
A television camera providing visual input; bump sensors for collision detection; optical range finders measuring distance to obstacles.
Computation
Shakey was connected via radio and a trailing cable to a DEC PDP-10 computer (filling an entire room, with performance vastly inferior to a modern smartphone), which performed all perception, planning, and control computation.
Mobility
Two independently powered wheels with a caster for balance, allowing differential drive steering.
Workspace
Shakey operated in carefully structured indoor environments—floors with contrasting baseboards, walls painted distinct colors, large geometric objects (blocks, wedges, platforms) used as mobile furniture. This environmental structure was necessary for Shakey’s vision systems to detect and recognize objects.
Shakey’s revolutionary capabilities included:
Visual Perception
Shakey could process camera images to detect edges, segment regions, identify objects, and estimate their positions—fundamental computer vision tasks that were cutting-edge research in the late 1960s. While crude by modern standards (requiring high-contrast specially designed environments), this represented the first robot that could “see” and make sense of visual input.
Spatial Reasoning
Shakey built and maintained an internal map of its environment, tracking its own position within that map and representing object locations. This “world modeling” allowed Shakey to reason about spatial relationships—to understand that a specific object was blocking a path, or that a platform could be used as an intermediate step to reach a goal.
Automated Planning
Given a high-level goal (e.g., “push the block into the corner”), Shakey could formulate a plan—a sequence of actions that would accomplish the goal. If direct movement to the block was obstructed, Shakey would plan a multi-step solution: navigate around obstacles to reach the block, align properly, push it toward the goal. This automated planning represented genuine artificial intelligence—the robot was figuring out what to do rather than following pre-programmed scripts.
STRIPS Planning System
Shakey’s planning capabilities relied on STRIPS (Stanford Research Institute Problem Solver), one of AI history’s seminal planning systems. STRIPS represented the world as a set of logical facts (propositions that were true or false), actions as operators that changed which facts were true, and planning as searching through possible action sequences to find one that achieved goal conditions. STRIPS influenced decades of AI research and remains foundational to automated planning.
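A toy planner conveys the flavor. The sketch below is an illustrative STRIPS-style search—facts, operators with preconditions and add/delete lists—not SRI’s original system; the two operators and fact names are invented:

```python
# STRIPS in miniature: states are sets of facts, operators transform them,
# and planning is breadth-first search for a sequence reaching the goal.
from collections import deque

OPERATORS = {               # name: (preconditions, add list, delete list)
    "goto_block": (frozenset({"at_start"}), {"at_block"}, {"at_start"}),
    "push_block": (frozenset({"at_block"}), {"block_in_corner"}, set()),
}

def plan(initial, goal):
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:                        # all goal facts hold
            return actions
        for name, (pre, add, delete) in OPERATORS.items():
            if pre <= state:                     # operator is applicable
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None

print(plan({"at_start"}, {"block_in_corner"}))   # -> ['goto_block', 'push_block']
```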
A-Star Path Planning
Shakey’s team developed A* (A-star), one of the most important algorithms in computer science—a heuristic search method that efficiently finds shortest paths through graphs, used not just in robotics but in video games, GPS navigation, network routing, and countless other applications. A* remains the standard path-planning algorithm in robotics more than half a century later.
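The algorithm itself fits in a few lines. Here is a compact A* over a small occupancy grid; the grid, bounds, and Manhattan heuristic are illustrative choices, not Shakey’s implementation:

```python
# A*: best-first search ordered by g (cost so far) + h (admissible estimate).
import heapq

def astar(blocked, start, goal, size=5):
    def h(p):                                   # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if nxt in blocked or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

print(astar({(1, 1), (1, 2), (1, 3)}, (0, 0), (2, 2)))
# -> [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```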
Learning
Shakey incorporated rudimentary machine learning—it could generalize from successful problem-solving experiences to solve similar problems more efficiently in the future, representing early investigation of how robots could improve through experience.
Shakey operated slowly by modern standards—executing simple tasks might require many minutes of computation (the PDP-10 processing camera images, updating world models, planning, generating control commands). Video footage of Shakey shows a robot that moves hesitatingly, pauses frequently (while thinking), and operates in carefully controlled environments—yet observers recognized they were witnessing genuine machine intelligence, qualitatively different from previous robots’ fixed programs.
Shakey’s significance extends far beyond its actual capabilities:
Integration
Shakey integrated multiple AI technologies (computer vision, planning, navigation, learning) into one system—the first demonstration that these components could work together in a physical robot rather than just in laboratory simulations. This integration proved far harder than expected, with much of the research effort devoted to discovering how difficulties compound when multiple uncertain systems interact.
AI Embodiment
Shakey demonstrated that artificial intelligence couldn’t be purely abstract symbol manipulation but needed grounding in physical reality. Many AI problems that seemed straightforward when discussed theoretically became tremendously difficult when robots actually had to perceive real environments, handle sensor uncertainty, and execute plans through imperfect actuators. This “reality gap” between simulation and real-world performance remains central to robotics.
Grand Challenge
Shakey represented AI’s first “grand challenge”—a project requiring integration of multiple capabilities to demonstrate intelligence through concrete achievement. This model—focusing research community effort on ambitious goals with clear success criteria—proved effective enough that it’s been repeatedly used (DARPA Grand Challenge, ImageNet, RoboCup, etc.).
Public Visibility
Shakey received substantial media coverage (LIFE magazine, major newspapers, TV news), introducing broad audiences to AI and robotics. The publicity had double-edged effects—it excited popular imagination and attracted funding, but also created unrealistic expectations about how quickly AI would advance, contributing to disappointment when progress proved slower than hype suggested.
AI Winter Lessons
The Shakey project and similar ambitious AI research led to what’s called the “AI Winter”—a period (roughly 1974-1980) when early AI’s promises failed to materialize, leading to funding cuts and skepticism. Researchers had underestimated AI problems’ difficulty, particularly the challenge of handling real-world complexity, uncertainty, and variability. Early successes in controlled laboratory environments didn’t scale to more realistic scenarios. These failures taught hard lessons about the need for realistic assessment, incremental progress, and recognition that intelligence is harder to achieve than early AI pioneers anticipated. These lessons were repeatedly forgotten and relearned through subsequent AI winters and springs over the following decades.
Shakey’s technology influenced multiple directions in robotics. Its vision systems fed into computer vision research. Its planning systems influenced automated reasoning and decision-making. Its navigation approaches led to mobile robotics and autonomous vehicles. Its integrated architecture inspired subsequent robotic platforms. While Shakey itself never left the laboratory, the concepts it pioneered became foundational to practical autonomous systems that followed.
1969 – Stanford Arm
Victor Scheinman’s doctoral research produced the Stanford Arm—the first all-electric, six-degree-of-freedom robot arm designed specifically for computer control. Its six joints—five rotary plus one telescoping (prismatic) joint, an arrangement chosen so the arm's inverse kinematics could be solved in closed form—provided sufficient freedom to position and orient an end-effector arbitrarily within the arm’s workspace, matching the versatility of the human arm-wrist-hand system (minus the fingers’ dexterity).
Electric actuation (motors at each joint rather than hydraulic cylinders) provided several advantages:
Precision
Electric motors with proper control systems can achieve position accuracy measured in fractions of millimeters, far better than hydraulics’ coarser control. This enabled assembly tasks requiring tight tolerances—inserting pins in holes, aligning components, attaching small fasteners—that hydraulic robots couldn’t perform.
Feedback Control
Electric motors are easily coupled with encoders (sensors measuring rotation angle), enabling closed-loop servo control in which the motor continuously adjusts to hold a desired position despite external forces, momentum, and other disturbances. This made robots more accurate and able to handle varying loads (a toy control loop is sketched after this list).
Cleanliness
Electric motors don’t leak oil, making them suitable for clean environments (electronics assembly, pharmaceutical manufacturing, food processing) where hydraulic contamination was unacceptable.
Computer Integration
Electric motors respond to electronic control signals naturally, while hydraulics require servo-valves converting electrical signals to fluid control—more complex and less responsive. Computer-controlled electric robots could achieve faster, more accurate motion.
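As promised under Feedback Control, here is a minimal proportional-derivative (PD) position loop driving a simulated unit-inertia motor toward a target encoder angle. The gains and toy motor model are invented, not any manufacturer's values:

```python
def simulate_servo(target=1.0, kp=40.0, kd=6.0, dt=0.001, steps=3000):
    """Drive a toy unit-inertia motor toward `target` (radians) with PD feedback."""
    angle, velocity = 0.0, 0.0               # simulated encoder angle and its rate
    for _ in range(steps):
        error = target - angle               # feedback: compare command to encoder reading
        torque = kp * error - kd * velocity  # PD control law: spring toward target, damp motion
        velocity += torque * dt              # integrate the toy motor dynamics
        angle += velocity * dt
    return angle

print(f"final angle: {simulate_servo():.4f} rad (target 1.0)")
```

Because the loop constantly corrects against the measured error, the motor settles on the target even if disturbances push it away—the essence of servo control.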
The Stanford Arm became a research standard—universities worldwide acquired copies or built similar designs, using them to investigate manipulation, trajectory planning, sensor-based control, and assembly strategies. Much of what roboticists know about manipulator control was discovered through experiments with Stanford Arms and their descendants.
1970 – Lunokhod 1
The Soviet Union’s Luna 17 mission landed on the Moon in November 1970 carrying Lunokhod 1 (“moon walker” 1), the first successful robotic lunar rover. This eight-wheeled vehicle (about the size of a small car) could be remotely driven by operators on Earth, but the roughly 2.6-second round-trip radio delay meant Lunokhod couldn’t be controlled like a remote-control car. Instead, operators would study images transmitted from the rover, plan a short drive path, transmit commands, then wait for the rover to execute the commands and transmit new images before planning the next segment. This semi-autonomous operation—human planning and goal-setting with robotic execution—became the paradigm for planetary exploration.
Lunokhod 1 operated for 11 months (far exceeding its designed 3-month lifespan), traveled over 10 kilometers across the lunar surface, transmitted thousands of images and scientific measurements, and demonstrated that robotic exploration of other worlds was feasible. Its success paved the way for subsequent rover missions and established expectations that deep-space robots must be over-engineered for extreme reliability—repair missions being impractical or impossible.
1975 – PUMA
Scheinman continued arm development while at MIT, designing what became PUMA (Programmable Universal Machine for Assembly). Unimation licensed the design with support from General Motors, which sought robots for precision assembly (installing small parts, screwing fasteners) rather than just heavy material handling. The PUMA’s significance lay in combining electric precision with industrial robustness:
Industrial-Grade Reliability
Unlike research arms that broke down frequently, the PUMA was engineered for continuous operation in factory environments—sealed joints to exclude dust and coolant, robust motors and gearboxes, proven components, comprehensive error-checking.
Teach Programming
PUMA robots used teach pendants—handheld controllers allowing operators to jog the robot to desired positions and record waypoints, building motion programs without computer programming expertise. This democratized robot programming, allowing factory floor workers to reconfigure robots rather than requiring computer scientists.
Lightweight Payload
PUMA’s electric actuation traded the heavy-lifting capability of hydraulic robots for handling lighter payloads (typically 2-20 pounds depending on model) with greater precision and speed—perfect for assembly applications where parts were small but had to be positioned accurately and quickly.
1976 – Viking Landers
NASA’s Viking 1 and 2 missions (launched 1975, landed 1976) became the first successful robotic landers on Mars. Unlike Lunokhod, the Viking landers were stationary (they didn’t rove), but they featured robotic arms for collecting soil samples and conducting experiments. The arms executed pre-programmed sequences autonomously after receiving high-level commands from Earth (one-way radio delay to Mars varies from about 4 to 24 minutes depending on planetary positions, making real-time control impossible).
Viking’s significance lay in demonstrating autonomous science—robots making local decisions (adjusting sampling attempts based on sensor feedback, retrying operations if initial attempts failed) without awaiting human instructions for every action. This autonomy became essential for practical space exploration; purely teleoperated systems where operators controlled every action wouldn’t work when communication delays stretched to minutes or when missions lasted months or years (requiring onboard systems to handle faults and anomalies rather than waiting for human diagnosis and repair instructions).
The Viking landers operated successfully for 6 years (Viking 1) and 3.5 years (Viking 2), sending back over 50,000 images and extensive scientific data. They established Mars exploration as a realistic goal and demonstrated that robots could operate reliably in one of the solar system’s harshest environments—temperature extremes, dust storms, radiation, thin atmosphere—conditions that would kill unprotected humans within minutes.
Space Robotics Lessons
These early space missions established several principles that define planetary robotics:
Extreme Reliability
Robots headed to other worlds must be over-engineered to extreme standards—redundant systems, extensive testing, conservative designs—because “debugging” after deployment is impossible. This contrasts with terrestrial robotics where robots can be repaired or replaced if they fail.
Autonomy from Necessity
Space robots need autonomy not because humans don’t want to control them but because physics (speed of light, vast distances) makes continuous human control impossible. This necessity drove autonomy research, creating capabilities that later enabled terrestrial autonomous systems.
Risk-Reward Balance
Space agencies tend toward conservatism—they use proven technologies even if newer approaches might perform better—because mission failures (losing billion-dollar assets) are catastrophic for public support and funding. This tension between innovation and reliability shapes space robotics in ways different from commercial terrestrial robotics.
Remote Science
Planetary robots enabled scientific investigation of places humans couldn’t reach, expanding empirical science’s domain. Every rock analyzed by Mars rovers, every image transmitted from outer planets, every sample returned from asteroids represents human knowledge extended by robotic proxies billions of miles away.
1979 – Stanford Cart
Hans Moravec (who moved from Stanford to CMU in 1980) developed the Stanford Cart—a slow, methodical mobile robot that demonstrated vision-based navigation. The Cart used a camera mounted on a sliding track to create stereo image pairs (taking pictures from different positions to extract depth information), processed these images to build 3D environmental models, planned paths around obstacles, and executed movements. In 1979, the Cart successfully navigated a chair-filled room autonomously—perhaps the first robot to navigate purely through visual perception in an unstructured environment. The process was painfully slow (taking many hours to cross a room, stopping frequently to acquire and process images), but it proved that vision-based autonomous navigation was possible.
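The Cart's sliding camera exploited the classic stereo relation: a nearby point shifts more between the two views than a distant one, so depth follows from the pinhole-camera formula Z = f·B/d (focal length times baseline, divided by disparity). A minimal sketch with invented numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# A feature that shifts 20 pixels between views, with a 500 px focal length
# and a 0.5 m camera slide, sits 12.5 m away:
print(depth_from_disparity(500, 0.5, 20))   # 12.5 (metres)
```

The hard part—which consumed the Cart's hours of computation—was finding which pixels in one image corresponded to which pixels in the other before this simple formula could be applied.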
1981 – Direct Drive Arm
Takeo Kanade, at Carnegie Mellon, developed the direct drive arm—instead of mounting motors remotely and transmitting power through gears, belts, or cables (creating backlash, friction, and compliance that degraded performance), Kanade placed motors directly at joints, eliminating transmission mechanisms. This required developing specialized high-torque brushless DC motors, but the result was manipulators with unprecedented speed, precision, and backdrivability (ability to sense forces applied to the arm). Direct drive influenced subsequent research robots, though industrial arms typically retain gear reduction for greater force-to-weight ratios.
1985 – Surgical Application
The PUMA 560 model performed the first robot-assisted surgery in 1985—a neurosurgical biopsy where the robot positioned a probe into the patient’s brain with greater precision than human surgeons could achieve manually. This application demonstrated that robotic precision and stability (robots don’t have tremors, don’t fatigue, can hold positions indefinitely) could extend human surgical capabilities, leading eventually to dedicated surgical robots like the da Vinci system.
The Stanford Arm → PUMA lineage established electric servo-controlled manipulators as the standard for precision robotics. Modern industrial robots overwhelmingly use electric actuation (hydraulics survive only in specialized applications requiring extreme force), and most use 6-axis articulated designs descended from Scheinman’s work. The PUMA’s success proved that robots could handle delicate assembly tasks requiring dexterity and precision, not just brute material handling, opening industrial robotics to electronics, medical devices, and other precision manufacturing sectors.
1989 – “Fast, Cheap and Out of Control”
Rodney Brooks (of MIT) published his influential paper advocating for behavior-based robotics over the traditional sense-plan-act paradigm. Brooks argued that robots like Shakey, which spent extensive time building world models and planning, were fundamentally limited—real-world complexity, sensor uncertainty, and environmental dynamics made comprehensive world modeling impractical. Instead, Brooks proposed building robots with many simple behaviors (avoid obstacles, move toward goal, follow walls) that ran in parallel and competed/cooperated to produce overall system behavior.
This paradigm shift, though controversial, influenced a generation of mobile robots. Brooks’ approach emphasized: situated embodiment (intelligence emerges from body-environment interaction, not just computation); parsimony (use the simplest mechanisms sufficient for the task); emergence (complex behavior arises from simple components’ interaction); and robustness (many simple behaviors degrade gracefully when individual components fail, while complex brittle systems fail catastrophically). These principles resonated with Grey Walter’s tortoises (rediscovered some 40 years on) and echoed the subsumption architecture Brooks had introduced in 1986, which powered his insect-like robots and eventually commercial successes like iRobot’s Roomba.
1992 – Boston Dynamics: Dynamic Locomotion Research Begins
Marc Raibert, an MIT professor studying legged locomotion, spun Boston Dynamics off from MIT to develop dynamic, highly mobile robots. Raibert’s research focused on how legged systems (bipeds, quadrupeds) maintain dynamic balance during fast motion—running, jumping, negotiating obstacles—rather than the slow quasi-static walking that previous legged robots used (where the robot maintained stable footing at every instant, never relying on momentum).
Raibert’s key insights involved understanding locomotion as controlled falling—runners and jumping animals are constantly falling forward, catching themselves with each footfall, using momentum and carefully timed leg movements to maintain dynamic balance. His robots implemented these principles through sophisticated control systems that managed each leg as a spring-mass system, adjusting leg stiffness, angle, and timing to match terrain and maintain balance even when perturbed.
Boston Dynamics initially focused on research contracts (primarily DARPA funding) rather than commercial products, allowing them to pursue long-term ambitious projects without near-term profit requirements. This approach produced a series of increasingly impressive robots demonstrating capabilities that stunned robotics researchers and captured public imagination:
1997 – Mars Pathfinder/Sojourner and the Rover Revolution
NASA’s Mars Pathfinder mission landed on July 4, 1997, delivering Sojourner, the first successful wheeled rover on another planet (Lunokhod operated on the Moon; Sojourner was Mars’s first). This small rover (about the size of a microwave oven, weighing 23 pounds) operated for 83 days (nearly 12 times its designed 7-day mission), traveled about 100 meters from the lander, and demonstrated autonomous hazard avoidance—the rover could detect obstacles using laser range-finding and stereo cameras, plan paths around them, and execute safe traverses without detailed human commands for each movement.
Sojourner’s limited autonomy (necessitated by communication constraints—commands uploaded once daily, with the rover executing those commands throughout the Martian day) proved that robots could make local navigation decisions safely, avoiding hazards that mission controllers on Earth couldn’t predict from orbital images. This operational paradigm—humans providing high-level goals (“go analyze that rock”), robots executing those goals through autonomous decision-making—became standard for planetary exploration.
1998 – Consumer Robotics Emerges: LEGO Mindstorms RCX
LEGO partnered with MIT Media Lab to create Mindstorms, a robotics kit built around the RCX (Robotic Command eXplorer)—a programmable brick containing a microcontroller—packaged with sensors (touch, light, rotation), motors, and LEGO Technic construction elements. Children (and adults) could build robots from LEGO, program them using visual programming languages (dragging blocks representing commands rather than writing code), and experiment with robotics concepts through play.
Mindstorms’ significance extends beyond its commercial success (millions of units sold):
Educational Impact
Mindstorms became standard equipment in schools worldwide, introducing millions of students to robotics, programming, and engineering. FIRST LEGO League competitions (where teams build and program Mindstorms robots to solve challenges) engaged students who might not otherwise encounter robotics, broadening the field’s talent pipeline.
Democratization
Before Mindstorms, building robots required specialized knowledge, machining capabilities, and significant expense. Mindstorms provided standardized components, simplified programming, and accessible documentation, allowing hobbyists to create functional robots without engineering degrees. This democratization expanded the robotics community beyond academic researchers and industrial engineers to include hobbyists, students, and tinkerers whose diverse perspectives contributed fresh ideas.
Standardization
The RCX’s programmability and sensor interfaces became a platform for experimentation. Universities used Mindstorms for teaching robotics courses (cheaper than research robots, sufficient for demonstrating fundamental concepts). Researchers prototyped concepts with Mindstorms before committing to custom hardware. A standard platform enabled knowledge sharing—tutorials, code libraries, design patterns spread across a global community.
1999 – Sony AIBO
Sony introduced AIBO (Artificial Intelligence Robot, also a pun on the Japanese aibō, meaning “companion”), the first commercially successful entertainment robot—a robotic dog with articulated limbs, cameras, microphones, touch sensors, and autonomous behaviors. AIBO could walk (with a surprisingly organic-looking quadruped gait), recognize faces, respond to voice commands, play with a ball, express simulated emotions through body language and LED “eyes,” and develop “personality” through learning algorithms that modified behaviors based on interactions.
AIBO cost approximately $2,000—expensive for a toy but affordable compared to most robots. Sony sold over 150,000 units across multiple model generations (1999-2006), creating the first significant consumer market for autonomous robots. AIBO’s significance lay not in technical breakthroughs (most components existed previously) but in demonstrating that people would pay significant money for robots providing companionship and entertainment rather than practical utility.
AIBO owners formed emotional attachments—treating robots as pets, giving them names, feeling grief when units broke down (Sony operated a “hospital” repairing damaged AIBOs, and when parts became unavailable after production ceased, some owners held “funerals”). This anthropomorphization and emotional bonding demonstrated that social robotics—robots designed for companionship, emotional interaction, and social presence—addressed real human needs beyond pure functionality.
2000 – da Vinci Surgical System and Honda ASIMO
da Vinci
The FDA approved Intuitive Surgical’s da Vinci Surgical System in 2000 for general laparoscopic surgery (expanded to additional procedures subsequently), marking robots’ entry into operating rooms as surgeons’ assistants. The da Vinci system consists of:
Surgical Console
The surgeon sits at a console away from the patient, viewing a high-definition 3D stereoscopic view of the surgical site through cameras inserted into the patient. The surgeon manipulates hand controllers that provide intuitive control (moving hands naturally, as if directly manipulating instruments).
Patient-Side Cart
Robotic arms holding surgical instruments insert through small incisions (minimally invasive surgery). These arms have seven degrees of freedom (more than the human wrist), can rotate instruments 360°, scale motions (large surgeon hand movements translate to precise small movements inside the patient), and filter out hand tremors.
Vision System
Specialized cameras with high-definition 3D imaging providing magnified views superior to what surgeons could see during open surgery or traditional laparoscopy.
The da Vinci doesn’t operate autonomously—it’s a sophisticated teleoperator, translating surgeon commands to precise instrument motions. The system’s advantages include:
Precision
Scaling and tremor filtration enable surgeons to manipulate tissue with sub-millimeter accuracy, beneficial for delicate procedures (prostate surgery, cardiac valve repair, tumor excision); a minimal sketch of both techniques follows this list.
Access
The instrument arms’ enhanced dexterity and 360° rotation enable surgical approaches through small incisions that would be difficult or impossible with traditional instruments, reducing patient trauma, blood loss, and recovery time.
Ergonomics
Surgeons sit comfortably at the console rather than standing hunched over patients for hours, reducing surgeon fatigue and potentially extending careers.
Training
The console’s design allows teaching—mentor surgeons can observe and guide from a second console, and recordings of procedures serve as training materials.
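As promised under Precision, here is a minimal sketch of motion scaling plus tremor filtering—an exponential-moving-average low-pass ahead of a 5:1 scale-down. The sample rate, filter gain, and tremor frequency are illustrative assumptions, not Intuitive Surgical's parameters:

```python
import math

def filter_and_scale(hand_positions, scale=0.2, alpha=0.02):
    """EMA low-pass (suppresses ~10 Hz tremor at 1 kHz sampling), then 5:1 motion scaling."""
    filtered, out = hand_positions[0], []
    for p in hand_positions:
        filtered += alpha * (p - filtered)   # low-pass: slow motion passes, fast tremor is attenuated
        out.append(scale * filtered)         # large hand motion -> small instrument motion
    return out

# One second of a slow 1 mm reach plus 0.2 mm of 10 Hz tremor, sampled at 1 kHz:
hand = [0.001 * t / 1000 + 0.0002 * math.sin(2 * math.pi * 10 * t / 1000)
        for t in range(1000)]
tool = filter_and_scale(hand)
print(f"instrument travel: {tool[-1] * 1000:.3f} mm for ~1 mm of hand travel")
```

The deliberate hand motion survives the filter almost intact while the tremor is largely smoothed away, and the scale factor turns centimeter-scale hand gestures into millimeter-scale instrument motion.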
Controversies surround surgical robotics: systems are expensive (millions of dollars for da Vinci systems, plus expensive disposable instruments), hospitals pressure surgeons to use robots to justify purchases (potentially performing robotic surgery even when traditional approaches would be equally effective), and outcomes data is mixed—some studies show benefits, others suggest no significant advantage over skilled traditional surgery for many procedures.
Despite controversies, surgical robotics represents a successful application domain where robots enhance human capabilities (precision, dexterity, visualization) rather than replacing humans entirely. Thousands of hospitals worldwide operate da Vinci systems performing millions of procedures annually, and the field continues advancing toward greater autonomy (robots performing suturing automatically under surgeon supervision, AI analyzing surgical video to provide real-time guidance).
Honda ASIMO
Honda unveiled ASIMO (Advanced Step in Innovative Mobility) in October 2000, though the robot’s development spanned decades (Honda began humanoid robotics research in 1986). ASIMO stood 4’3″ tall, weighed 115 pounds, and demonstrated unprecedented humanoid capabilities:
Dynamic Walking
ASIMO walked with a smooth, natural-looking gait at speeds up to 2.7 km/h (later versions achieved 9 km/h running). Unlike earlier humanoid robots that shuffled slowly with flat-footed steps, ASIMO walked heel-to-toe with knee bend and hip sway resembling human locomotion.
Stair Climbing
ASIMO could ascend and descend stairs, adapting to different step heights and maintaining balance on slopes—capabilities essential for humanoids operating in human environments designed around human mobility.
Object Manipulation
ASIMO could grasp objects with five-fingered hands, open doors, pour drinks, carry trays—demonstrating dexterous manipulation necessary for service tasks in human environments.
Autonomous Behavior
ASIMO incorporated obstacle avoidance, path planning, face recognition, speech recognition, and social behaviors (waving, bowing, responding to voice commands)—creating the impression of an aware, socially competent robot rather than a mindless automaton.
Honda never commercialized ASIMO (it remained a research platform and public relations tool), but its demonstrations inspired global interest in humanoid robotics. ASIMO proved that human-like robots were achievable with sufficient engineering investment, challenging the notion that humanoid form factors were impractical. The robot demonstrated at prestigious venues (World Expo, White House, major corporate events), served as Honda’s technology showcase (implying that Honda’s engineering prowess in automobiles extended to cutting-edge robotics), and inspired researchers worldwide to pursue humanoid robotics.
2002 – iRobot Roomba: Robots Enter Homes
iRobot, founded in 1990 by MIT roboticists including Rodney Brooks (of behavior-based robotics fame), had built robots for military, space, and research applications but achieved breakthrough commercial success with Roomba, a robotic vacuum cleaner launched in September 2002. Roomba’s success (over 40 million units sold as of 2023) marked robots’ transition from industrial tools and research platforms to consumer products in millions of homes worldwide.
Roomba’s design exemplified Brooks’ behavior-based philosophy:
Simple Behaviors, Complex Results
Roomba doesn’t map rooms or plan optimal cleaning paths (early versions, at least—later models added mapping). Instead, it follows simple rules: drive forward until hitting an obstacle; turn and drive in a new direction; follow walls when encountered; spiral outward in open spaces; return to dock when the battery runs low. These behaviors, executed through simple sensors (bump sensors, cliff sensors detecting stairs, dirt sensors detecting concentrated debris), produce effective coverage through a largely random walk with some structured patterns (sketched in code after this list).
Affordability
Priced initially around $200 (far less than industrial robots costing thousands to millions), Roomba reached consumer price points where people would purchase robots for convenience rather than necessity.
Practical Utility
Roomba addressed a real need (floor cleaning—tedious, time-consuming work) with sufficient effectiveness that users found value despite limitations (can’t climb stairs, occasionally gets stuck, doesn’t clean as perfectly as careful human vacuuming). Good-enough performance at accessible price proved more valuable than perfect performance at prohibitive cost.
Anthropomorphization
Roomba owners often named their robots, attributed personality (“stubborn,” “hard-working,” “lazy”), and formed emotional attachments similar to those AIBO owners experienced. This demonstrated that even simple task-focused robots (not designed for companionship like AIBO) triggered human social-relational responses when sharing living spaces.
Platform for Innovation
Roomba’s open interface (iRobot Create, released 2007) allowed researchers and hobbyists to hack Roombas, adding sensors, modifying behaviors, and using them as mobile platforms for robotics education and experimentation. This extended Roomba’s impact beyond cleaning into broader robotics community.
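The rule set described under Simple Behaviors, Complex Results can be sketched as a prioritized arbitration loop—the essence of behavior-based control. The sensor fields and command names here are invented for illustration, not iRobot's firmware:

```python
def cliff_avoid(s):  return "backup"      if s["cliff"] else None  # highest priority: don't fall downstairs
def bump_escape(s):  return "turn_random" if s["bump"]  else None
def wall_follow(s):  return "follow_wall" if s["wall"]  else None
def spiral(s):       return "spiral"                               # default behavior in open space

BEHAVIORS = [cliff_avoid, bump_escape, wall_follow, spiral]        # priority order, safety first

def arbitrate(sensors):
    """Return the command of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(arbitrate({"cliff": False, "bump": True, "wall": False}))    # -> turn_random
```

No map, no plan—just a fixed priority ordering over reflexes, run every control cycle, yet the emergent result covers a room.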
Roomba established the home service robot category. Subsequent products followed—robotic lawn mowers, window cleaners, pool cleaners, mopping robots, gutter cleaners—applying automation to household maintenance. While these robots remain narrow specialists (unlike science fiction domestic robots that would perform diverse tasks), they demonstrated sustainable business models around home automation, paving the way for more sophisticated domestic robots.
2003 – Spirit and Opportunity
NASA launched twin rovers Spirit and Opportunity in 2003 (landing January 2004), designed for 90-day missions to explore different Mars regions. Both far exceeded expectations—Spirit operated for 6 years, Opportunity for nearly 15 years (until a planet-wide dust storm in 2018 blocked solar panels, depleting batteries). These rovers demonstrated extraordinary longevity, accumulated scientific discoveries (proving Mars once had liquid water, finding mineral evidence of ancient habitable environments), and captured public imagination through spectacular images and anthropomorphized mission narratives (people rooted for the “plucky” rovers surviving Martian winters and dust storms).
The rovers’ key technologies included:
Visual Odometry
Tracking position by analyzing how the environment appears to move through camera images as the rover drives, enabling position estimation when GPS wasn’t available (no GPS satellites orbit Mars).
Autonomous Navigation
Onboard software that could plan safe paths through visible terrain, detect and avoid hazards (rocks, slopes, soft soil), and execute drives of tens of meters without human intervention—necessary because round-trip communication delays (roughly 8-48 minutes depending on Earth-Mars distance) made real-time teleoperation impossible.
Power Management
Solar panels charged batteries daily; sophisticated power management systems allocated limited power among competing systems (driving, science instruments, heating, communications), ensuring survival through dust storms and Martian winters that reduced solar energy.
Fault Protection
Autonomous fault detection and response—if systems detected problems (overheating, low power, wheel slippage), they would “safe” the rover (stop operations, maintain power, wait for human diagnosis) rather than continuing actions that might cause permanent damage.
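A minimal sketch of that safing pattern: health checks gate every planned action, and any anomaly drops the rover into a conservative safe mode. The telemetry fields and thresholds are invented for illustration:

```python
def check_health(telemetry):
    """Return a list of detected anomalies (empty list means healthy)."""
    anomalies = []
    if telemetry["battery_v"] < 24.0:
        anomalies.append("low power")
    if telemetry["motor_temp_c"] > 70:
        anomalies.append("overheating")
    if telemetry["wheel_slip"] > 0.5:
        anomalies.append("excessive wheel slip")
    return anomalies

def step(telemetry, planned_action):
    """Gate every planned action behind a health check."""
    anomalies = check_health(telemetry)
    if anomalies:
        return ("SAFE_MODE", anomalies)      # halt, conserve power, wait for ground diagnosis
    return ("EXECUTE", planned_action)

print(step({"battery_v": 23.1, "motor_temp_c": 40, "wheel_slip": 0.1}, "drive 10 m"))
# -> ('SAFE_MODE', ['low power'])
```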
These rovers demonstrated that robots could operate reliably for years in extreme environments with minimal human intervention, validating the approach of semi-autonomous exploration robots that combine human strategic planning with robotic execution and local decision-making.
2004 – DARPA Grand Challenge
DARPA offered a $1 million prize to any autonomous vehicle completing a 142-mile course through California’s Mojave Desert within 10 hours—no human control, vehicles had to navigate using onboard sensors and computation. The challenge attracted 106 entrants, with 15 vehicles qualifying for the actual event. The result was humbling: the best-performing vehicle (Carnegie Mellon’s Sandstorm) traveled only 7.4 miles before getting stuck on a rock, and most vehicles failed within the first few miles. No one finished; no one claimed the prize.
Despite failure, the 2004 Challenge catalyzed autonomous vehicle research. Teams recognized the problem was harder than anticipated (off-road navigation involved extreme sensor uncertainty, rough terrain, unexpected obstacles). The public nature of failure—vehicles crashing spectacularly, making obviously wrong decisions, getting confused by simple obstacles—demonstrated vividly that autonomous driving wasn’t a solved problem but required substantial additional research.
2005 – DARPA Grand Challenge and BigDog
DARPA Grand Challenge
DARPA offered $2 million for a similar 132-mile desert course. The difference a year made was dramatic: five vehicles completed the course, with Stanford’s “Stanley” winning in 6 hours 54 minutes (under the 10-hour limit). Carnegie Mellon’s vehicles finished second and third. This demonstrated that with focused effort, autonomous navigation in unstructured off-road environments was achievable—vehicles could perceive terrain, plan paths, avoid obstacles, and navigate for hours without intervention.
Stanley’s success relied on:
Sensor Fusion
Combining data from multiple sensor types (cameras, LIDAR, radar, GPS, inertial sensors) to build robust environmental perception that no single sensor could provide.
Machine Learning
Using machine learning to classify terrain (driveable vs. obstacle vs. uncertain) from sensor data, learning from previous driving experience to improve classification accuracy.
Probabilistic Reasoning
Representing uncertainty explicitly (is that a rock or a shadow? is that surface firm or soft sand?) and making decisions that accounted for uncertainty rather than assuming perfect knowledge—see the worked example after this list.
Real-Time Performance
Processing sensor data and making control decisions at rates sufficient for vehicles traveling 30+ mph over rough terrain—requiring efficient algorithms and computational architectures.
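As a small example of the Probabilistic Reasoning item above, here is a Bayes-rule update of the probability that a terrain cell is an obstacle, given repeated noisy sensor readings. The sensor-model probabilities are invented for illustration:

```python
def bayes_update(p_obstacle, reading_says_obstacle,
                 p_hit=0.8,     # P(reading says obstacle | actually an obstacle)
                 p_false=0.2):  # P(reading says obstacle | actually clear)
    """One Bayes-rule step: fold a noisy binary reading into the obstacle belief."""
    if reading_says_obstacle:
        num = p_hit * p_obstacle
        den = p_hit * p_obstacle + p_false * (1 - p_obstacle)
    else:
        num = (1 - p_hit) * p_obstacle
        den = (1 - p_hit) * p_obstacle + (1 - p_false) * (1 - p_obstacle)
    return num / den

p = 0.5                                   # uninformed prior: obstacle and clear equally likely
for reading in [True, True, False, True]:
    p = bayes_update(p, reading)
print(f"P(obstacle) = {p:.2f}")           # ~0.94 after four noisy readings
```

Rather than trusting any single reading, the belief converges as evidence accumulates—and a conflicting reading (the False above) lowers confidence instead of being silently discarded.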
BigDog
Perhaps Boston Dynamics’ most iconic early robot, BigDog was a quadruped roughly the size of a large dog (though weighing 240 pounds), designed to carry military equipment over rough terrain where wheeled or tracked vehicles couldn’t go. BigDog could walk, trot, climb slopes, traverse rubble, recover balance when kicked or slipping on ice, and carry 340-pound payloads—all while maintaining dynamic stability through sophisticated sensor-feedback control.
BigDog’s hydraulic actuation (powered by a gasoline engine driving hydraulic pumps) provided enormous power-to-weight ratios necessary for dynamic locomotion with heavy payloads, though the engine was notoriously loud (limiting military utility for stealth operations). The robot used onboard sensors (inertial measurement unit detecting body orientation and acceleration, joint encoders tracking leg positions, foot contact sensors) with control algorithms running at high frequency (1000 Hz) to continuously adjust leg forces and positions, maintaining balance the way animals do—through rapid reflexive adjustments rather than careful pre-planning.
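The leg-as-spring-mass idea can be sketched as a virtual spring-damper law: the commanded foot force grows with leg compression and compression rate, so each leg stores and returns energy like a pogo stick. The gains here are invented; BigDog's actual controllers were far more elaborate:

```python
def leg_force(rest_length_m, length_m, compression_rate, k=5000.0, b=200.0):
    """Virtual spring-damper: push back harder the more (and faster) the leg compresses."""
    compression = rest_length_m - length_m          # how far the leg is squashed
    return k * compression + b * compression_rate   # axial foot force in newtons

# A 1 m leg compressed to 0.9 m while still compressing at 0.2 m/s:
print(f"{leg_force(1.0, 0.9, 0.2):.0f} N")          # -> 540 N of push-back
```

Run at high frequency (the text's 1000 Hz), this kind of law makes balance a continuous reflex: a stumble compresses a leg faster, which immediately commands more push-back, without any deliberate re-planning.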
BigDog demonstrated that legged robots could navigate terrain inaccessible to wheels or treads, potentially enabling operations in disaster zones, forests, mountains, and other unstructured environments. The military significance was obvious (getting supplies to troops in mountainous Afghanistan, for example), though practical deployment proved elusive due to noise, maintenance requirements, and limited autonomy (BigDog was primarily teleoperated or followed humans rather than navigating independently).
2007 – DARPA Urban Challenge
DARPA’s third challenge moved from desert to city—a 60-mile course on closed urban roads where vehicles had to follow traffic rules, handle intersections, merge into traffic, avoid both static obstacles and other moving vehicles, and park. This required even more sophisticated capabilities: understanding traffic rules, predicting other vehicles’ movements, social reasoning (yielding appropriately, not being overly timid or aggressive), and handling much higher traffic density.
Carnegie Mellon’s “Boss” won the 2007 challenge, completing the course in just over 4 hours. Boss demonstrated capabilities approaching real-world autonomous driving:
Behavioral Planning
Making strategic decisions (when to change lanes, whether to pass, when to yield) based on traffic conditions and mission objectives—not just reactive obstacle avoidance but goal-directed planning in dynamic environments.
Interaction with Other Vehicles
Successfully navigating intersections, merging, and other interactions requiring understanding and predicting other road users’ behaviors—essentially theory of mind applied to traffic.
Robustness
Handling unexpected situations (construction zones, detours, other vehicles’ unusual behaviors) without human intervention, demonstrating resilience necessary for real-world deployment.
The DARPA Challenges transformed autonomous vehicles from academic curiosity to recognized frontier technology. Many participants founded autonomous vehicle companies (Google hired Stanford and Carnegie Mellon researchers for its self-driving project; other teams formed startups like Aurora, Argo AI, nuTonomy). The challenges demonstrated that autonomous driving’s fundamental problems were solvable, attracting massive investment from tech companies and automotive manufacturers that accelerated progress toward commercial deployment.
2009 – Google Self-Driving Car Project
Google X (Google’s research lab) launched a self-driving car project led by Sebastian Thrun (Stanford professor who led the Stanley team) and Anthony Levandowski (engineer from UC Berkeley’s autonomous vehicle program). Google aimed to develop fully autonomous vehicles capable of driving without human intervention in normal conditions—a more ambitious goal than driver-assistance systems (which required humans to monitor and intervene) offered by automotive manufacturers.
The project’s approach combined:
Detailed Mapping
Pre-mapping routes in high detail (centimeter-level accuracy, including lane markings, curb heights, traffic light positions), allowing vehicles to compare sensor data against expectations to detect changes (construction, obstacles, other vehicles).
Sensor Suite
Combining LIDAR (laser range-finding producing 3D point clouds), cameras (for visual detection of traffic lights, signs, lane markings, pedestrians), radar (for detecting vehicles in poor visibility), GPS and IMU (position and orientation), creating redundant perception where multiple sensors corroborated observations.
Machine Learning
Training neural networks on millions of miles of driving data to recognize and classify objects (pedestrians, vehicles, cyclists, animals), predict behaviors (will that pedestrian cross? will that car change lanes?), and make driving decisions.
Simulation
Creating virtual environments where the autonomous system could train on millions of virtual miles, encountering rare scenarios (accidents, unusual weather, misbehaving road users) more frequently than real-world driving would provide, accelerating learning and testing safety-critical situations without physical risk.
2010 – Robonaut 2
NASA and GM jointly developed Robonaut 2 (R2), a humanoid robot designed to work alongside astronauts on the International Space Station. Unlike earlier humanoid demonstrators focused on locomotion, Robonaut emphasized dexterous manipulation—its hands (with human-like size, strength, and dexterity) could use the same tools astronauts used, operating equipment designed for humans without requiring specialized adaptations.
R2 launched to ISS in 2011, becoming the first humanoid robot in space. Its mission was assisting with routine tasks (handling equipment, performing inspections, testing samples), freeing astronauts for more complex activities. R2’s hands could grip tools, flip switches, mate electrical connectors, and manipulate objects with precision approaching human capability, demonstrating that humanoid robots could serve as versatile assistants in space environments.
The humanoid form factor made particular sense for space stations designed around human body proportions—handholds, equipment placement, tool designs all assumed human dimensions and capabilities. Building robots that matched these dimensions allowed them to work in existing environments rather than requiring infrastructure modifications. R2’s development also advanced understanding of human-robot collaboration—how humans and robots could safely share workspace, coordinate tasks, and communicate intentions.
2013 – Atlas
Boston Dynamics, funded by DARPA’s Robotics Challenge program, unveiled Atlas—a 6-foot-tall, 330-pound humanoid robot representing the state of the art in dynamic humanoid locomotion. Atlas could walk over rough terrain, climb stairs, squeeze through narrow passages, and maintain balance when pushed or disturbed—capabilities enabled by sophisticated sensors (LIDAR, stereo cameras, joint position/force sensors), powerful hydraulic actuators, and advanced control algorithms that continuously adjusted joint forces to maintain stability.
Atlas served as hardware platform for the DARPA Robotics Challenge (2012-2015), where teams programmed robots to perform disaster-response tasks (opening doors, turning valves, drilling holes, driving utility vehicles, climbing stairs, clearing debris)—simulating situations like the Fukushima nuclear disaster where environments were too hazardous for humans but required human-level dexterity and mobility. The competition revealed that humanoid robotics remained extremely challenging—robots fell frequently, moved slowly, struggled with seemingly simple tasks—but also demonstrated impressive capabilities when systems worked correctly.
2015 – First Fully Driverless Ride
In October 2015, Google gave Steve Mahan, a legally blind man, a fully autonomous ride on public roads in Austin, Texas, in a vehicle with no steering wheel or pedals—no human could intervene even if they wanted to. The demonstration proved that fully driverless operation was achievable on specific routes under specific conditions (what SAE terminology calls Level 4 autonomy); Level 5—driverless operation under all conditions—remained ongoing research.
2016 – Waymo Formation
Google spun the self-driving project into Waymo, an independent subsidiary of Alphabet (Google’s parent company), signaling transition from research to commercialization. Waymo began partnerships with automotive manufacturers (Chrysler Pacifica minivans, Jaguar I-Pace electric vehicles) to produce autonomous vehicles at scale rather than one-off conversions.
2018 – Atlas Parkour
Boston Dynamics released videos of Atlas performing parkour—running, jumping over obstacles, performing backflips—demonstrating dynamic agility that shocked robotics researchers and public audiences. The robot’s ability to coordinate whole-body motion, manage momentum, and execute ballistic maneuvers represented unprecedented achievement in humanoid control.
These demonstrations served multiple purposes: they showcased Boston Dynamics’ technical prowess (attracting talent, customers, and acquisition interest—Google acquired the company in 2013, later selling to SoftBank in 2017, then to Hyundai in 2021); they pushed the boundaries of what researchers believed possible with humanoid robots; and they captured public imagination, making robotics exciting and visible in ways that industrial robots couldn’t achieve.
2019 – Spot Commercial Release
While Atlas remained a research platform, Boston Dynamics commercialized Spot, a smaller quadruped robot (resembling BigDog’s more refined descendant), selling it for approximately $75,000. Spot found applications in industrial inspection (navigating factories, construction sites, and infrastructure to capture data), public safety (bomb disposal, hazmat response, search and rescue), entertainment (performing in stage productions), and research (universities and labs using Spot as a platform for developing mobile manipulation, navigation, and perception algorithms).
Spot’s success demonstrated that advanced mobile robots could transition from research to commercial viability when applications justified costs. Industrial inspection—where Spot could navigate dangerous or inaccessible areas (active construction sites, nuclear facilities, unstaffed oil rigs), collect data (thermal imaging, gas detection, visual inspection), and prevent human exposure to hazards—provided sufficient value that customers would pay premium prices for sophisticated robots.
2021 – Robotics on Mars Reaches New Heights
NASA’s fifth Mars rover (following Sojourner, Spirit, Opportunity, and Curiosity) landed in Jezero Crater on February 18, 2021, carrying the most advanced science instruments and autonomy capabilities of any planetary rover. Perseverance incorporated lessons from decades of Mars exploration:
Enhanced Autonomy
Improved self-navigation allowing the rover to cover more ground with less human intervention, detecting and avoiding hazards autonomously, planning efficient routes toward science targets.
Sample Caching
Ability to collect and seal rock samples for eventual return to Earth by future missions—enabling laboratory analysis with instruments too large and sophisticated to send to Mars, potentially detecting signs of ancient life.
Advanced Instruments
Spectrometers, cameras, ground-penetrating radar, and a small drill for collecting samples from rocks that might preserve evidence of ancient microbial life when Jezero Crater contained a lake billions of years ago.
Ingenuity Helicopter
Perhaps most remarkably, Perseverance carried Ingenuity, a 4-pound helicopter designed to demonstrate powered flight on Mars—a major technical challenge given Mars’s thin atmosphere (less than 1% of Earth’s atmospheric density, providing minimal lift).
On April 19, 2021, Ingenuity completed its first flight—rising about 10 feet above the Martian surface, hovering for 30 seconds, then landing successfully. This represented history’s first powered, controlled flight on another world, comparable to the Wright Brothers’ 1903 achievement on Earth.
Ingenuity was designed as a technology demonstration (intended for only five flights over 30 days), but it far exceeded expectations—by 2024, it had completed over 70 flights, traveling kilometers from Perseverance, serving as aerial scout identifying interesting targets and safe routes for the rover. Ingenuity demonstrated that aerial robotics could extend planetary exploration beyond ground-based rovers, accessing areas too dangerous or difficult for wheeled vehicles (steep slopes, deep craters, scattered boulders), providing reconnaissance reducing rover navigation risk, and dramatically accelerating exploration pace.
The success inspired plans for future aerial robots on Mars, Titan (Saturn’s moon with thick atmosphere enabling efficient flight), Venus (using balloons and aircraft in upper atmosphere), and other worlds—opening new paradigms for robotic exploration where mobile platforms wouldn’t be limited to surfaces.
2022 – Tesla Optimus
In September 2022, Tesla unveiled Optimus (Tesla Bot), a humanoid robot prototype aimed at eventual mass production for general-purpose labor.
2023 – ChatGPT for Robotics
The emergence of large language models (LLMs) like ChatGPT (OpenAI, released November 2022) rapidly influenced robotics when researchers demonstrated that these models could interface with robotic systems, allowing natural language control and planning.
LLM integration represented a major shift in robotics—moving from low-level motor control and explicit programming toward higher-level reasoning, natural communication, and leveraging world knowledge learned from language. This integration suggested that future robots might combine LLM reasoning and communication capabilities with specialized robotic perception and control, creating systems that could understand human needs expressed naturally, plan and execute physical actions competently, and collaborate with humans more naturally than any previous robotic systems.
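A sketch of that integration pattern follows. The `llm_complete` function below is a hypothetical stand-in for any chat-model API (here canned so the sketch runs offline), and the robot calls are likewise invented: the model is shown a small whitelisted robot API and asked to translate a natural-language request into calls against it, which are validated before anything touches the motors:

```python
ROBOT_API = """Available calls (answer with one per line):
  move_to(location)
  pick(object_name)
  place(location)"""

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model call; canned so the sketch runs offline."""
    return 'move_to("table")\npick("red cup")\nmove_to("shelf")\nplace("shelf")'

ALLOWED = ("move_to(", "pick(", "place(")

def plan_with_llm(request: str) -> list[str]:
    """Ask the model for a plan, then validate every call against the whitelisted API."""
    prompt = f"{ROBOT_API}\nUser request: {request}\nPlan:"
    calls = [line.strip() for line in llm_complete(prompt).splitlines() if line.strip()]
    for call in calls:                        # reject anything outside the known API
        if not call.startswith(ALLOWED):
            raise ValueError(f"unsafe or unknown call: {call}")
    return calls

print(plan_with_llm("put the red cup on the shelf"))
```

The division of labor is the point: the language model supplies world knowledge and task decomposition, while conventional robotic perception and control remain responsible for safe execution.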
Final Thoughts
As we reflect on over two centuries of robotic development, we witness a field that has transcended its origins as mechanical novelties to become integral to human progress. Looking forward, robotics seems poised for continued rapid advancement driven by:
AI Integration
Machine learning, computer vision, natural language processing, and reasoning capabilities developed for AI generally apply directly to robotics, enabling robots to perceive, understand, and act with increasing sophistication.
Deployment Scale
As robots prove themselves in applications (autonomous vehicles, warehouse automation, domestic service), economies of scale drive cost reduction, creating virtuous cycles where lower costs enable new applications, which further reduce costs.
Biological Inspiration
Ongoing research into how biological organisms achieve robust, efficient, adaptive behavior continues inspiring new robotic approaches—soft robotics mimicking biological tissues, neuromorphic computing mimicking biological neural systems, evolutionary algorithms mimicking biological adaptation.
Human-Robot Symbiosis
Rather than robots replacing humans entirely, many applications involve humans and robots collaborating—combining human flexibility, judgment, and creativity with robotic precision, endurance, and consistency. This complementarity suggests futures where humans and robots work together rather than competing.
Where this journey leads—whether toward Karel Čapek’s dystopian vision of robot uprising, Isaac Asimov’s optimistic vision of beneficial robot assistants, or something entirely unforeseen—remains to be written. What’s certain is that robots will continue shaping human civilization as profoundly as any technology in our history.
Thanks for reading!