Tuesday, May 8, 2012

Human, Transhuman, Posthuman (Part 1)

Last weekend I attended the "Humanity+" conference in Melbourne (http://hplusconf.com.au), held at RMIT.  It consisted of an eclectic mix of presentations by invited speakers, without contributed papers or a published proceedings, though videos of the talks will become available.  The conference was under the auspices of the Humanity+ organisation (http://humanityplus.org), whose aim is to promote thinking about the "next steps" of humanity.  The main areas of focus appear to be biomedical and bioengineering developments for longer and healthier life, leading on to enhancements of the body, and artificial intelligence and enhancements of the mind.  The chair of Humanity+, Natasha Vita-More, was one of the presenters.  I went because I thought the gerontologist Aubrey de Grey would be worth hearing, and because the artist Stelarc was giving a presentation. 

This conference was more optimistic than pessimistic.  Climate change and population pressures were there in the background, and sustainability was a theme, but on the whole the intent was to look beyond these problems to longer-term possible futures for humanity.  The organiser was Adam Ford, who has just become a board member of Humanity+, and who has had a considerable involvement in this general area.

Maybe 80 people attended, predominantly but by no means exclusively male, and a mixture of young and old, with relatively few people in the middle age range.  I got the impression that almost everyone there had a background in science, engineering or computing.

Aubrey de Grey was well worth hearing.  His view on ageing is that normal metabolic processes produce "damage" of various kinds, such as junk inside cells that the body cannot break down.  We can tolerate a certain amount of such damage, but eventually it starts to harm us.  De Grey listed all the classes of damage that are known (and indicated that no fundamentally new classes had come to light in the last 30 years), and outlined plausible approaches to dealing with all of them.  He mentioned two specific projects at his laboratory dealing with junk inside the cell, targeted at macular degeneration, which is a leading cause of blindness, and at atherosclerosis, the inflammation of the artery walls that leads to heart disease and strokes.

All of this comes under the heading of "regenerative medicine": therapies that rejuvenate (that is, make young again) systems in the body by clearing out damage and taking those systems some way back towards the healthy young adult state.  Once such therapies are in place for all the major types of damage (which is quite a few years away), de Grey thinks that we will be able to have another 30 years of healthy middle age.  These days 60 is the new 50; with these therapies, 80 or 90 would be the new 50.  But that is only the start.  As techniques improve and clear out a greater proportion of damage, repeated rejuvenation would allow enormous prolongation of healthy, active life, perhaps ultimately to 1,000 years.  This doesn't include a cure for cancer, but it does include a proposed method of avoiding cancer by manipulating telomeres (the caps at the ends of DNA strands).
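To show the shape of this argument (and not any of de Grey's actual figures), here is a toy simulation of my own: damage accumulates steadily, a therapy every decade clears some fraction of it, and "lifespan" is simply the age at which accumulated damage passes a tolerance threshold.  Every number in it is invented purely for illustration.

```python
# Toy model (my own, not de Grey's): damage accumulates each year, periodic
# therapies clear a fraction of it, and the threshold stands in for the point
# at which damage starts to harm us.  All figures are illustrative only.

def years_of_life(repair_fraction, therapy_interval=10,
                  damage_per_year=1.0, tolerance=60.0, max_years=2000):
    """Return the age at which accumulated damage first exceeds the tolerance."""
    damage = 0.0
    for age in range(1, max_years + 1):
        damage += damage_per_year                  # ordinary metabolic damage
        if repair_fraction > 0 and age % therapy_interval == 0:
            damage *= (1.0 - repair_fraction)      # rejuvenation therapy
        if damage > tolerance:
            return age
    return max_years                               # threshold never reached

for fraction in (0.0, 0.05, 0.1, 0.3):
    print(f"clearing {fraction:.0%} of damage every decade: "
          f"threshold reached at age {years_of_life(fraction)}")
```

In this toy model a modest repair fraction only delays the threshold by a decade or two, while a sufficiently large one keeps damage bounded indefinitely, which is the qualitative point about repeated, improving rejuvenation.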

All of this provoked a lot of discussion, and de Grey devoted his second presentation to discussing objections to his program.  The diseases of ageing are not just a first world problem: de Grey said that already two-thirds of the deaths in the world are due to them.  Of course if we do have the potential to live to 1,000 years there will have to be massive changes in society, but de Grey pointed out that by the time such long life becomes feasible there will have been massive changes in society anyway.

Incidentally de Grey is not a food faddist or anything of that sort.  He was asked about diet, and said that as long as one is reasonably sensible about diet and exercise (and doesn't smoke), things like the "paleo diet" and the like don't achieve much.  And he enjoyed a beer at the pub afterwards.

The other presentation that contained a road map for future developments was Tim Josling's, on artificial intelligence.  He outlined the so-called hype cycle that tends to apply to new technologies.  Once a new technology becomes known, at first there is a great deal of hype, resulting in wildly inflated expectations.  When the technology doesn't live up to these, there is a "trough of disillusionment", and after that attitudes to the technology finally settle into a realistic view of what it can achieve.

Artificial intelligence (AI) went through this cycle: after quite a long initial period of hype, the "AI winter" descended in the 1980s, when funding dried up and AI was generally regarded as having failed.  In fact it developed quietly in various specialised areas.  Josling listed several techniques developed years or decades ago that were impractical at the time but are now coming into their own as increased computer power has made them feasible.  Incidentally, Josling is more optimistic about the continuation of Moore's Law (that the number of transistors on a chip doubles every two years) than Herb Sutter (whom I mentioned in a previous post), but it doesn't matter for Josling's argument whether increased computing power arrives via Moore's Law in one box or via networks, as Sutter expects.
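As a back-of-envelope illustration of what "doubling every two years" buys (the thousand-fold figure below is my own example, not one from the talk):

```python
# Rough arithmetic for a capability that doubles every two years.

def growth_factor(years, doubling_period=2.0):
    """Relative increase in raw capability after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(growth_factor(10))   # one decade  -> 32x
print(growth_factor(20))   # two decades -> 1024x

# So a technique that was roughly a thousand times too expensive to run in
# the early 1990s becomes affordable about twenty years later -- whether the
# extra capability comes from one chip or from a network of machines.
```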

Josling expects that, on a relatively short time frame, machines will be able to do more and more low-level white-collar jobs more cheaply than people, and he ended by posing the question: "Leisured aristocracy or unemployed underclass?"

This sort of prophecy was made in my youth, and hasn't really come to pass.  However, the "acceptable" minimum rate of unemployment has risen from 2% to 5% in my lifetime, and since the official figures are constructed to be as low as possible, the true unemployment figure is at least 10%.  I also think that the availability of cheap Third World workers has delayed the development of automation, but that is beginning to come to an end.  Eventually the machines will be cheaper than even a Third World worker.

In the background of Josling's presentation is a concept known as "The Singularity", and there was a panel discussion about it at the conference.  The Singularity is the point at which machines become smarter than we are; this may be a long way off, but it is hard to argue convincingly that it can never happen.  The Singularity is a sort of "event horizon", as we cannot predict what would happen after it.  As far as raw processing power is concerned, by one estimate a current desktop machine with a good graphics card has maybe 1/2000 of the raw power of a human brain, and networks of 2000 such machines already exist.  One of the panellists, Colin Hales, did note, however, that recent discoveries suggest the brain may have far more processing power than this estimate implies.

The work up until now has been in specialised domains, for example making driverless trucks for mining sites.  There was mention of a possible approach to general artificial intelligence being pioneered by Marcus Hutter at the Australian National University.  Josling indicated that the promising advances in artificial intelligence involve various forms of machine learning (and I got the impression that this applies to Hutter's work); this led into a discussion of risks.

If a machine has learnt from experience rather than being explicitly programmed (and this already happens in some areas), then we don't know in detail how it does what it does.  If it does something unexpected and kills or injures someone, it is not at all clear who should be held accountable.  One of the attendees, who works as a safety engineer (I didn't catch his name), said that once a technology such as that for driverless trucks is mature, it is more reliable than having human drivers; it is the early period of introduction of such technologies that is really dangerous.  In this context, the Google Car has driven itself autonomously around Los Angeles.  One of the panellists, James Newton-Thomas, who works with autonomous mining equipment, indicated that the current approach is to segregate the equipment behind physical barriers, as well as fitting independent safety systems.
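To make the "learnt from experience rather than explicitly programmed" point concrete, here is a minimal sketch of my own (not anything shown at the conference): a toy perceptron whose behaviour is fixed by training examples, and whose final "knowledge" is just a handful of numerical weights rather than rules anyone wrote down or could readily inspect.  The data and task are invented for illustration.

```python
# A toy perceptron that learns a rule from examples instead of being given one.

import random

random.seed(0)

# Invented training set: two-feature inputs with a 0/1 label
# (imagine sensor readings -> "obstacle" / "no obstacle").
data = [([0.1, 0.9], 1), ([0.8, 0.2], 0), ([0.2, 0.7], 1),
        ([0.9, 0.3], 0), ([0.3, 0.8], 1), ([0.7, 0.1], 0)]

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0

def predict(x):
    """Classify an input using the current weights."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Learning loop: nudge the weights whenever a prediction is wrong.
for _ in range(100):
    for x, label in data:
        error = label - predict(x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

# What the machine "knows" afterwards is only these numbers, not a rule.
print(weights, bias)
```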

One question that was only touched on at the conference was how to make sure that a super-intelligent machine would be friendly towards us, and there was some discussion of the relationship among consciousness, intelligence and morality.  There was also some discussion of the uses to which governments and large corporations would put super-intelligent machines.  The prospect of large-scale technological unemployment, and the thought-police-like powers already available via automated surveillance and data mining, are much more immediate concerns.

(To be continued...)
