Showing posts with label technological singularity.

Friday, June 24, 2016

A Fire Upon the Deep (a book review post)

Sometimes, there are those books that you really just expect to love. Or, at least, to like a lot. They have all the things that you generally like in books. Or, at least, for that type of book. By all accounts, I should have really liked A Fire Upon the Deep, but it never really took off for me. You could say that it got stuck in the slow zone, especially considering it took me something like six months to read.

It's not that I hated the book, but it never achieved likability. It was like one of those foods you're willing to eat to be polite but, really, you'd just rather not. Like asparagus.

The first issue was the characters. Not that they were the first issue, exactly, but you can forgive a lot of stupid stuff in books (or on TV or whatever) if the characters are good. Good meaning that you can relate to them in some way and empathize with their situation. Or negatively empathize (as with an antagonist). But this book had zero characters with whom I could connect, so I never came to care about what happened to any of them. The only character I came even close to liking got killed not long after he was introduced.

And it was difficult to actually dislike the "villain," since it amounted to no more than a program.

So not only did I not have anyone to root for, I also didn't have anyone to root against, which left nothing compelling in the story to keep me wanting to read it.

Some would say the book is about the world building, which is extensive, but I didn't find that appealing, either. There were too many things that I found, well, just dumb. Like the galaxy having "zones." Not zones like you'd have on a map, but zones like the layers of a rain forest: floor, understory, canopy, emergent. The problem with these zones in the book (the Unthinking Depths, the Slow Zone, the Beyond, the Transcend) is that they were represented somewhat like evolutionary stages. Earth is in the Slow Zone but, once man had evolved enough, he moved up to the Beyond, except it's the technology that's evolving, not man.

The problem with all of that is that the technology cannot actually physically exist (work) in an incorrect zone. Imagine it like this: You grow up in a kind of rundown neighborhood and all you have is a bike, but you work hard and save and, eventually, move to a better neighborhood and buy a house and a car. Let's say that one day you want to visit your old home, so you decide to drive by and see it just for the sake of nostalgia. The only problem is that when your car enters the old neighborhood, it quits working. It just shuts off. You could still put it in neutral and push it around the streets, but that would take a very long time and be a lot of hard work. Or you could build a bicycle mechanism into your car that you could switch to when your engine cut off.

I don't find this kind of thing fascinating to ponder, not in any way. It's a ridiculous approach to physics and the universe.

And not to spoil the ending (I'm not actually going to tell you what happens), but you shouldn't read the next bit if you want to go in unspoiled:

It has a totally deus ex machina ending. That's not a problem in and of itself, because you know from the beginning, basically, that that's what they're looking for. However, when it happens, the book goes all in and offers absolutely no explanation. The ending just happens. They show up, and everything that is going to happen happens without them doing anything other than being there. And that, also, doesn't quite make sense, but no explanation is offered. It was unsatisfying, to say the least.

And that was after the six-month slog to read it.

Probably, I will go ahead and read the next one, A Deepness in the Sky, but that's because I already have it. If I didn't, I wouldn't bother. And when I say I'm going to read it, I only mean that I'm going to start it. If it's not better, I'm not going to force myself through it like I did this one.

Note: This is one of the authors I decided to explore several years ago when I was doing the fiction-to-science thing for a-to-z. Vernor Vinge came up with the idea of the technological singularity, and these books deal with that. So, yeah, sometimes I read a book I'm not enjoying so that I can understand the impact of it (like Snow Crash, which was horrible but, after reading it, I understand why people became so enthralled with it (hint: it wasn't the writing)). Trust me, it's not some weird sadomasochistic reading urge. I just want to understand the cultural significance of the thing.

Friday, July 31, 2015

Physics of the Future (a book review post)

For me, Physics of the Future was a bit of a research project. I have a couple of different sci-fi things in various stages, and I wanted to see how this stuff lined up with what I'm doing. As it turns out, pretty well. Although, I have to say, I do disagree with a few things, not that I'm the expert; Kaku is the physicist. However, I think the idea of a "space elevator" is a fantasy, and I don't really understand why people cling to it so hard.

Having said that, I do know that it's fantasies (ideas) that turn a lot of "science fiction" into plain old science. I did, after all, do a whole series on that during A-to-Z a few years ago.

But I digress...

So the premise of the book is that Michio Kaku, a theoretical physicist, would look into the actual science being developed today and, based on the pace of past developments, make a projection (prediction) about the kinds of things we can expect to see in the future. Assuming we, as a race, live long enough to see those things come to fruition. And, yes, he talks about that "if," too.

For me, Kaku spent too much time dwelling on the future of medicine. Not only does medicine get its own chapter (chapter three), but it's laced throughout the book. I get it. I do. People are concerned with medical advances that can allow them to live how they want to live with no negative consequences, and, actually, some of the research currently underway might make that possible. It is entirely possible that my generation will be the last generation to die and that the next generation (my kids) could have potentially unlimited lifespans. There's even an outside chance that some of those developments could happen before the end of my generation, but that would require a remarkable breakthrough and, still, probably only be available to the fabulously wealthy. Kaku is considerably older than me, so I can understand the focus. Still, he covered the same ground about early cancer identification at least half a dozen times.

The other thing he spent too much time on was magnetism. Kaku seems quite enamored of the idea of telepathically controlling the environment through the use of superconductors, and he refers to this a lot during the course of the book (much like the nanomachines which will detect cancer). The problem is that this relies on the accidental discovery of something which may not actually exist. Our current generation of superconductors weren't developed; they were happened upon, and he bases much of his magnetism prediction on similar serendipity.

He also seems to be overly optimistic about the future of mankind, at least from my perspective. He spends a considerable amount of time explaining why the "singularity" won't happen or, if it does, why we'll be able to control it. He makes a point about how, one day, the most sought-after thing on the Internet will be wisdom, this after stating that humans are essentially the same as they've been since we became human. He expects ranting bloggers and funny cats to disappear as we all become enlightened, and I think he's been watching too much Star Trek. And that he doesn't really know humans very well if he thinks we (as a group) will give up funny cat videos. And blogger rants.

However, all of that said, the book is fascinating. The technology discussions are fascinating. And the chapter on the future of wealth is extremely fascinating. The unstated comparison of the US to the Ottoman Empire is especially compelling. Nutshell: at one point, the Ottoman Empire led the world in science... until it gave all of that up to embrace religious fundamentalism. Now look at where we are: at least 50% of America's leading scientists have come from other countries, and more and more of them, instead of staying here, are returning to those countries after they've received their education. America, because of the deplorable state of public education, is not producing sufficiently educated people of science. It's not our focus anymore.

If you're at all interested in the book, now is the time to read it. Only four years after publication, and parts of it are already becoming outdated. The section on self-driving cars is a good example. Current projections are that self-driving cars will be as common as smart phones within the next decade; Kaku doesn't really expect them to even start showing up until around 2030. He makes no mention of quantum communication and only mentions quantum computers as an unlikely option. IBM has just developed a computer chip that could completely change the computer industry. Warp fields have been created, too, another bit of science Kaku glosses over as being the least likely of options.

Still, it's fascinating. Even the stuff about the space elevator, but that's mostly because he spends time talking about carbon nanotubes during that part, and carbon nanotubes, if we can figure out how to make them long enough, are another technology that could completely change the world.

Of course, the drawback, even though Kaku has made it very accessible, is that it's very heavy on science. Well, it's all science, so I can see it being difficult for some people to get into. For whatever reason. But, you know, if you're writing any kind of science fiction, right now, this might be a book you want to have on your desk.

Thursday, April 19, 2012

The A to Z of Fiction to Reality: Robots and Androids

Finally, we arrive at it: robots. So many of these fiction to reality posts have touched on robots or things robotic that I considered just skipping robots entirely, but, for some, that might be tantamount to ending a book just short of the climax and never finishing it. At any rate, robots have been, in many ways, what's driving this series of posts, so it wouldn't be exactly fair to leave them out, and I don't want any self-aware robots coming and asking me why I'd disrespect them in such a way. This post is also going to expand on my artificial intelligence post, so you might want to go back and read that one before going on with this one if you haven't already read it.

In many ways, the quest to develop or invent an "artificial man" has been as ongoing as the quest for flight throughout human history. These ideas extend back into myth and legend, and, as with flight, even Leonardo da Vinci had a design for a mechanical man. Maybe he even tried to build it. Instead of wading through all of that stuff, though, I'm going to jump ahead to our more modern view of what a robot is... except that we don't have a definitive view of what a robot is.

To facilitate the conversation, I'm going to define a robot as an electro-mechanical machine that has the semblance of intelligent behavior. These electro-mechanical machines can range from autonomous to remote controlled. This definition leaves out clockwork machines (which many people would like to say are the first examples of robots, but, then, that would, technically, make a clock a robot, and I'm not willing to go there).

Having said that, I will, however, go with Tik-Tok from Ozma of Oz as the first example of a modern robot in literature. Even though he was a clockwork, he was self-aware and self-motivating, making him a clockwork robot, not just a clockwork that looked like a man. It would be 13 years after the introduction of Tik-Tok before the word robot would be coined.
Tik-Tok
Speaking of, the term robot was first introduced in 1920 in a play, R.U.R. (Rossum's Universal Robots), by Karel Čapek. The word comes from the Czech robota, meaning drudgery or forced labor, as that is the kind of work the robots in the play did. It doesn't end well for humanity.

As the 20th century progressed, robots became more and more common in fiction, including, perhaps, the most famous robots ever (okay, not perhaps; we all know they are).

However, of all the fictional appearances of robots, it is probably Isaac Asimov's robot short stories and novels that have been the most significant, not least of all for his Three Laws of Robotics.

Surprisingly enough (at least to me), the first electronic robots, Elmer and Elsie, were built in 1948 and 1949 by William Grey Walter. The first truly modern robot was invented in 1954, the Unimate, by George Devol. He sold it to General Motors in 1960, and its installation began the modern robotics industry.

And this is where things get complicated. Complicated because the quest has always been to build an artificial human, not a mechanical arm, which is what the Unimate was. And for the last 50 years, that's what we've been trying to do. We've been trying to build the specific form of a robot that we call an android, which is what Asimov writes about, even if that's not what he calls them. But it is what we call R2-D2 and C-3PO -- droids. And here is where we are today:
This is TOPIO 3.0, an android designed to play ping pong. He uses an advanced AI (artificial intelligence) that allows him to learn and improve his skill while playing. Basically, he adapts to the person he's playing against, learns how that person plays and adopts a strategy to beat the opponent. You can learn more about TOPIO here; although, I don't see a record of his wins and losses listed.

This is an Actroid, the most sophisticated android currently "alive." The newest model is named Sara. You can watch her explain how she works.

So... we're not quite to self-aware, self-motivating robots and androids, but we are stepping in that direction. In fact, robots are one of the biggest driving forces in AI research. Science fiction author Vernor Vinge (A Fire Upon the Deep, A Deepness in the Sky) believes we are heading toward a "technological singularity" (a term he coined) in which we will technologically develop a greater-than-human intelligence. Because we cannot comprehend the kinds of changes that will occur after such an intelligence is created, he calls this an "intellectual event horizon." With all the research and development in quantum computing and quantum nodes, I have a hard time thinking he's wrong. [My friend Rusty (who drew this picture of me) has been going on about Vinge for some time now, and, so far, I haven't read anything by him. Not because I haven't wanted to, but because I'm way behind on my reading and haven't wanted to try to work anything new into the stack until I cut it down some; however, after reading this stuff, I'm going to have to work Vinge in.] It's not that Vinge is the only person to have written about these themes; we see them in science fiction a lot, usually with a very negative spin (the Terminator franchise, the Matrix trilogy), but he is the first to state his view so concisely, and this idea permeates much of his work. It will certainly be interesting to see how the future progresses in regard to artificial intelligence and robots!


The Three Laws of Robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.