Friday, May 11, 2018

Robot ethics



A couple of days ago, Sundar Pichai, the CEO of Google, got the kind of reaction that every hi-tech visionary craves when s/he demonstrates cool new technology.   But amid the clamor of oohs and aahs and whoops and hollers, we can just barely discern the uneasy murmur of ethics qualms.

For several years, Android smart phones have had Google Assistant, which is Google's version of Siri.  Ask it a question and it will (usually) answer.  I use it quite a bit for driving, not only for directions but to find the fastest route to familiar destinations; and beyond automobile navigation, it can find and present all sorts of information.  It's essentially the realization of the Star Trek "Computer".

Earlier this week, Pichai demo'd a new feature of Google Assistant, called Google Duplex.  Duplex can do more than respond to simple commands; it can hold actual conversations.  Pichai demonstrates two of them here - please click on the video and listen, if you're able to do so.

In the demos, Duplex makes a haircut appointment, and then discusses the possibility of a restaurant reservation with a host for whom English is not her first language.  Even seemingly simple and brief conversations like those two can offer surprising complexity and subtlety, and Duplex seems to navigate them perfectly.

It's not only impressive, it's more than a little spooky how realistically the robot voice has been engineered.  There is no trace of "robotics" about it.  It inserts "er's" and "mm-hmms" at natural-sounding places, sprinkles in a little contemporary slang, and utilizes inflections that make it pretty much indistinguishable from the voice of a real person.
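
To give a rough sense of one ingredient of that naturalness, here is a minimal, purely illustrative sketch - not Google's code; the filler words and the insertion probability are my own assumptions - of how a scripted text reply might be sprinkled with disfluencies before being handed off to a text-to-speech engine:

import random

# A minimal sketch, NOT Google's implementation: the filler words and the
# insertion probability below are assumptions chosen for illustration only.
FILLERS = ["um", "er", "mm-hmm"]
INSERT_PROBABILITY = 0.15  # chance of dropping a filler before any given word

def add_disfluencies(text, seed=None):
    """Sprinkle filler words into a scripted reply so the synthesized
    speech sounds a little less like a machine reading text aloud."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if rng.random() < INSERT_PROBABILITY:
            out.append(rng.choice(FILLERS) + ",")
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    print(add_disfluencies("I'd like to book a haircut for Tuesday at noon.", seed=3))

In the real system the placement and prosody of those fillers are presumably learned from recorded conversations rather than sprinkled at random, which is part of what makes the demo so unsettling.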

Is this capability an unalloyed good?  Matt O'Brien of the Associated Press raises some interesting questions:
Is it fair – or even legal – to trick people into talking to an AI system that effectively records all of its conversations? And while Google's demonstration highlighted the benign uses of conversational robots, what happens when spammers and scammers get hold of them?
Those strike me as a couple of pretty valid concerns.

Let me add this into the mix as well: O'Brien's questions are raised from the point of view of consumer rights, and that's a legitimate area of business ethics.  But let's also consider the impact on workers.  Whether it was inadvertent or clever, Pichai's two demonstrations were of Google Duplex making appointments on behalf of consumers.  But judging by the state of the technology, it seems likely enough that the roles in the two demo phone calls could be reversed: a human could be calling a robot receptionist or a robot restaurant host that would also manage the business's calendar and work assignments.  If it's cost-justifiable, that may allow these small businesses to fire their receptionists and hosts.

That wouldn't be new news; technology and robots have been replacing human workers for years now.   But as jobs like supermarket cashier, service station attendant, bank teller and retail sales clerk have been made ever more obsolete, there aren't many possibilities left for people without a college degree or specialized training.  Customer service has been one of the few areas that still hires these workers.  I would say those days are numbered, too.  And taxi and Uber drivers will be next as self-driving autos continue to improve.

New tech is pretty cool, there is no doubt about it.  And lucrative: not only is it enriching young entrepreneurial types like Pichai, but it's transforming colleges; one commentator observed recently that the trend in higher education is that "universities are large STEM research centers with small liberal arts colleges attached to them."  But as O'Brien demonstrates, the dusty, fusty liberal arts have a role to play in contemporary society - at least until the robots start making ethical decisions for us, too.

19 comments:

  1. I remember reading Isaac Asimov's Robot trilogy. The robots were more likable than some of the people. People even developed romantic attachments to some of them. Don't know if any of you saw the movie Passengers. The bartender Arthur was a robot. The two human protagonists fought over who got to spend time with him; he was the closest thing they had to a counselor. That stuff can totally mess with your mind. I have caught myself saying "thank you" to Alexa. If AI becomes more human, do we become less human?

  2. Anyone see "Robot and Frank"? AI robot doing elder care. Frank (Langella) induces the robot to assist his criminal enterprises. Even though the robots are equipped with some sense of morality, they are guileless and limited in their ability to react in certain situations.

    I think talking to Duplex would be more pleasant than getting a recorded message about store hours, but a biz could not control the interaction. I can see any number of problems that could arise.

    Or if you have ever watched a bunch of teenage boys play with Siri for 10 minutes, you can imagine just how much fun they might have sabotaging Duplex responses to customers.

  3. Not a big fan of AI in my life. However, I find the possibility of anti-AI AI interesting. For instance, can one load down an AI robocaller with an AI answering app that would run it in circles? I will never make an appointment using an AI. Rude. Unless one is handicapped.

  4. Every technological device is essentially a tool. Its function is to do some set of jobs. The basic criterion for evaluating some tool is to consider how well it deals with the jobs in question. Relevant considerations are: speed, safety, cost, ease of use, maintenance, etc. All of these considerations have to do with getting some job or jobs done.
    None of these considerations deals with questions like: Should this job be done? Should this person or these persons do these jobs? Under what circumstances is it appropriate to do these jobs?
    This latter set of questions all are related to the overall issue of justice. They are in principle not technological questions. No technological development can either answer justice questions or render them irrelevant.
    Answering these justice questions connected with new technological developments reasonably and convincingly frequently requires far from easy work by appropriately educated people working in consultation with one another. Technology experts are necessary but not sufficient. They have either to acquire well-formed commitments to the demands of justice or to heed the advice of those who do have such commitments.
    In sum, dealing with many of the new technological capabilities requires exercises of practical wisdom on the part of some widely representative group of thoughtful citizens united in an ongoing effort to submit the development and/or the deployment of such technology to the constraints that justice demands.
    As one of my sons has said to me, this issue is urgent, because our society's "ethical horses" are being put out to pasture at a very rapid rate.

    Replies
    1. Bernard - it seems to me that most of us aren't especially reflective concerning ethics. Absent a unifying set of ethical principles, I think most of us revert to marketplace considerations, especially in our consumer behavior: "Does it make my life easier? Can I afford it? Will it make me any money? Are there any social costs to my using it?" Perhaps there are ethical components to those questions, but I'd think that they scarcely qualify as ethical questions at all. But that strikes me as the state of the art of marketplace reasoning these days.

  5. So I feel that this discussion ties in with the ones we were having about the added work requirements for SNAP and Medicaid. If robots and AI take the jobs that recipients possibly could fill, will that mitigate the requirements? I doubt it. Far from creating a paradise where there is no need for work, or not as much of a need, it will only exacerbate the divide between the haves and have-nots, and fuel social upheaval.

    Replies
    1. Katherine - I know it has been a theory of some philosophers or futurists that a future in which robots do all the labor would free up humans for fulfilling leisure. I don't think that is how the world works. Here is Bloomberg columnist Matt Levine, from a post of mine from a couple of weeks back on venture capital funding:

      "You can imagine, if you want—and lots of techno-utopians do—that we are on the cusp of the end of scarcity, a new productivity revolution in which robots will produce everything necessary for human life, freeing us from the need to work. If that is true, or true-ish, or close to true, or possibly true in the very far future, then the big problem will be the distribution of prosperity: If robots produce abundance but eliminate jobs, the concern is that the people—tech founders and venture capitalists, probably—who own the robots will become unimaginably rich and powerful, while the people who don’t will be unemployed and dependent on those founders and VCs for their necessities."

      That dystopian vision, in which Google and Apple get all the work done, gather up all the money and run the world, while the rest of us have been rendered superfluous, may not be likely to come to pass (or perhaps it is), but it strikes me as considerably more possible than the idea of the human race enjoying a life of permanent leisure and lack of want.

      I don't want either version. I like my work. I get a lot of fulfillment and satisfaction from the nature of the work and the interpersonal relationships. I want to work at least 10 more years, if anyone will have me that long. I don't want to be rendered superfluous if I can still be a contributor.

    2. Jim, I guess they don't say "Idle hands are the devil's workshop" for nothing. I'm with you; I don't want either extreme.
      I have liked my job, too. However, retirement is looming; the target date is June 1st. I chose the date, but I'm feeling weird about it. Everybody says, "Oh, you must be so happy!" Maybe I'll get around to being happy once we get all the details of Medicare A, B, D etc. figured out and I finish helping train my replacement. It would honestly be less work just to keep working.

    3. Katherine, best of luck on your retirement. I think life changes always are stressful. I'm sure that you put some thought into this and it's a rational decision, even though your emotions may not be aligned to it yet. I hope you have many years of happiness with your husband in front of you.

    4. Thanks Jim. I'm sure it will all work out. As you say, change is stressful. All the retired people I've talked to (except for the consummate workaholics, which I'm not) say that they enjoy retirement.

  6. Just this morning I was reading about robotic harvesting equipment that harvests wheat fields in Kansas but can be operated by someone sitting in an office hundreds of miles away. I had always thought farming was one job that would never go away, because people have to eat. But the nature of farming and the number of people it employs can certainly change.

  7. Jim,
    I find your response to me simply shocking. Have you read Laudato si'?
    There are no deliberate human interactions that have no ethical import whatsoever. Some may be relatively neutral, but rare is the day when anything any of us does has no impact on other persons. To be human is to be involved in the realm of ethics.

    Replies
    1. Bernard, I'm sorry to shock you, and honestly I don't know why you should be shocked by the observation that people don't reflect much about ethics and don't feel beholden to ethical considerations. I think most of us live large portions of our lives that way. And I'd be surprised if that viewpoint contradicts anything in Laudato si. I think that's the sort of banal and self-interested human behavior that moved Francis to write it.

  8. Jim, if you're simply reporting a factual condition, then you may well be reporting it accurately. But what follows from this situation? Surely not indifference, shrugs, or mere sighs!

    Replies
    1. Bernard - no, I don't find the status quo which I'm reporting here acceptable; that's why I note it. That said, I don't think the issues we're touching on here comprise a very well-defined situation with a clear set of solutions in sight. If you detect shrugs and sighs in what I'm writing, I assure you they are not of indifference but rather a good deal of puzzlement. The integration of the economic means of production with high technology is a sleek and powerful locomotive roaring along at high speed, and insofar as that situation raises problems, it's far from clear (to me, anyway) what can be done about them. My only suggestions amount to an exhortation to Think, Judge and Act. Maybe that's adequate to the situation. I don't have any brighter thoughts.

  9. I enjoyed my work in the mental health system very much; I contributed to the good of society. I took early retirement at age 60. I wonder when I ever found time to work. I think we all should work less, get paid more for the work we do, and have more leisure. And tax the wealthy to pay for government services.

    Robots, like humans, have their flaws. The supermarket robotic express-line checkouts constantly make mistakes and have to summon the attendant. On the other hand, the non-express robotic checkout lines do a pretty good job except for handling coupons; they always call for the attendant.

    Actually a combination of robot and attendant seems to work best. I don’t mind scanning items, or bagging them. If the attendant helps by bagging my items it speeds the line for everyone.

  10. Jim,
    Your sleek and powerful locomotive racing at high speed is neither made by God from whole cloth nor the pure outcome of some impersonal natural forces. Some people are deliberately involved in designing and producing this engine and are actively involved in the management of the distribution of benefits and burdens this engine effects. Summarily, this management is manifest in the public policies that encourage or discourage or permit or forbid (etc.) the ways this locomotive functions.
    Public policy formulations and applications necessarily involve human choices and/or abstentions from choosing. These choices are ethical or moral choices and their outcomes are rightly imputed to specific people as morally praiseworthy or blameworthy.
    To the extent that any competent person is involved in the outcome of the prevailing economic-technological locomotive's functions, he or she bears some non-trivial responsibility for the outcomes of its distributions.
    Now, admittedly, there is no definitive "best way" to construct or modify this locomotive. But if Laudato Si and other corroborating analyses are to be taken seriously, we have to conclude that the ways in which your sleek locomotive is now running, and has been running, have produced serious undeserved harms to some people and undeserved benefits to some other people. As Laudato Si and other analyses have made clear, the demands of distributive justice are never to be sacrificed to economic or technological objectives.
    That there is no definitive, presently available better locomotive is no good reason simply to acquiesce in the maldistribution of benefits and burdens human agents are willingly cooperating in perpetuating.
    Let me explicitly acknowledge that there is little if anything for which I can rightly claim originality in the case I make here.

    Replies
    1. "Your sleek and powerful locomotive racing at high speed is neither made by God from whole cloth nor the pure outcome of some impersonal natural forces. Some people are deliberately involved in designing and producing this engine and are actively involved in the management of the distribution of benefits and burdens this engine effects. Summarily, this management is manifest in the public policies that encourage or discourage or permit or forbid (etc.) the ways this locomotive functions."

      I fear that's not completely accurate, as you've described it here. The designers of our system are folks who lived 200 years ago or more. There are no "men behind the curtain" who are perpetuating it. It's being perpetuated by 7 billion or so of our fellow humans.

      If it's a locomotive, it's a runaway train. There is no engineer in charge of it, no brakeman who can stop it. No national government can stop it. As for the alternatives, all the others we've tried so far are worse, far worse. The specific issues we're talking about in the case of Google Duplex - spammers, scammers, greedy owners who put workers out of work - those bad things aren't a bug in capitalism, they're a bug in human nature. Crime and greed are not system-specific. Crime and greed have happened in every system we've ever tried.

      And from what I can tell, Francis doesn't want to blow up the train. He wants to fix the locomotive. And fortunately, he's wise and humble enough to know that he lacks the technical knowledge to undertake the repair job. That is our job, we being those who are out in the world and taking part in the activity.

  11. Re the locomotive. Obviously, it is currently creating huge winners and so-sad losers. So Pope Francis is asking people to stand in its way. This is his April intention for the Pope's Worldwide Prayer Network (née Apostleship of Prayer):

    For Those who have Responsibility in Economic Matters
    That economists may have the courage to reject any economy of exclusion and know how to open new paths.
