In many ways, the quest to build an "artificial man" has been as persistent throughout human history as the quest for flight. These ideas extend back into myth and legend, and, as with flight, even Leonardo da Vinci had a design for a mechanical man. He may even have tried to build it. Rather than wading through all of that history, though, I'm going to jump ahead to our more modern view of what a robot is... except that we don't have a definitive view of what a robot is.
To facilitate the conversation, I'm going to define a robot as an electro-mechanical machine that has the semblance of intelligent behavior. These machines can range from fully autonomous to remote controlled. This definition leaves out clockwork machines (which many people would like to call the first robots, but that would technically make a clock a robot, and I'm not willing to go there).
Having said that, I will, however, go with Tik-Tok from Ozma of Oz (1907) as the first example of a modern robot in literature. Even though he was a clockwork, he was self-aware and self-motivating, making him a clockwork robot, not just a clockwork that looked like a man. It would be another 13 years after Tik-Tok's introduction before the word robot was even coined.
Speaking of which, the term robot was first introduced in 1920 in Karel Čapek's play R.U.R. (Rossum's Universal Robots). The word comes from the Czech robota, meaning drudgery or forced labor, which is the kind of work the robots in the play did. It doesn't end well for humanity.
As the 20th century progressed, robots became more and more common in fiction.
Surprisingly enough (at least to me), the first electronic robots, Elmer and Elsie, were built in 1948 and 1949 by neurophysiologist William Grey Walter. The first truly modern robot, the Unimate, was invented in 1954 by George Devol. He sold it to General Motors in 1960, and its installation began the modern robotics industry.
And this is where things get complicated. Complicated because the quest has always been to build an artificial human, not a mechanical arm, which is what the Unimate was. For the last 50 years, that's what we've been trying to do: build the specific form of robot that we call an android, which is what Asimov writes about, even if that's not what he calls them. It is also what we call R2-D2 and C-3PO -- droids. And that, more or less, is where we are today.
So... we're not quite to self-aware, self-motivating robots and androids, but we are stepping in that direction. In fact, robotics is one of the biggest driving forces in AI research. Science fiction author Vernor Vinge (A Fire Upon the Deep, A Deepness in the Sky) believes we are heading toward a "technological singularity" (a term he coined) in which we will technologically develop a greater-than-human intelligence. Because we cannot comprehend the kinds of changes that will occur after such an intelligence is created, he calls this an "intellectual event horizon." With all the research and development in quantum computing and quantum nodes, I have a hard time thinking he's wrong.

[My friend Rusty (who drew this picture of me) has been going on about Vinge for some time now, and, so far, I haven't read anything by him. Not because I haven't wanted to, but because I'm way behind on my reading and haven't wanted to work anything new into the stack until I cut it down some. After reading this stuff, though, I'm going to have to work Vinge in.]

It's not that Vinge is the only person to have written about these themes; we see them in science fiction a lot, usually with a very negative spin (the Terminator franchise, the Matrix trilogy), but he was the first to state the idea so concisely, and it permeates much of his work. It will certainly be interesting to see how the future progresses with regard to artificial intelligence and robots!
Asimov's Three Laws of Robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.