Bruno Latour. Science in Action.
This very short section discusses the rhetoric inherent in scientific practice.
Reading question:
Rhetoric, pp 30-33
1 In some of the robot visionaries’ articles (such as by Bill Joy, Ron Arkin and Ray Kurzweil) there appears to be an appeal towards argument from authority in isolated texts, just as described by Latour. Yet these are opinion-setting pieces for systems that have not yet been built. Is there an alternative for a more rhetorically balanced presentation of these ideas so that the readers have a fairer chance at correctly interpreting the opinions and concepts expressed?
Drew McDermott. Artificial Intelligence meets Natural Stupidity.
McDermott’s paper certainly caused ripples when it was first published. It represents a state of mind and a snapshot of time that is of great value to revisit.
Reading question:
- The inflationary spiral that McDermott describes for AI in 1981—do you think this phenomenon exists in robotics today? Describe its instantiation in our field if you believe it exists. If you believe we do not suffer from this problem, describe why you think we have managed to avoid it.
Rhetoric of Robotics is an unpublished manuscript by me. Please read it only as background. No questions required on it.
Jessica said:
Latour
First of all, I think that in these cases the authors are *trying* to provide a balanced presentation of what they see as a likely future. The audiences for these pieces are not scientists or experts, and furthermore the authors are speculating (as opposed to describing some concrete scientific experiment), so the idea that there is a “correct” way of interpreting things seems something of a fantasy.
That said, I think that a great way of combating wayward rhetoric is through balanced debate. When two or more authority figures engage in a direct debate, the audience does not have to come up with dissenting arguments and the associated authority figures necessary to support them–the debaters do that work for them. Of course, the problem of finding a good match of authority figures still exists, but that task can now fall to the moderator or editor.
——————–
McDermott
I’m not sure if we still suffer from this problem or not.
On the one hand, in my admittedly brief experience so far I’ve seen many examples of authors publishing source code along with their work (example: http://cvlab.epfl.ch/research/detect/dbrief). This directly works to solve the problem because code implementations are concrete and debatable. That means that we have fewer disconnects between what’s claimed in the paper and what has been concretely implemented.
On the other hand, I think there is still a huge emphasis on doing “novel” work, to the extent that we only have incomplete solutions to all of the “big” problems. For example, last Friday at the RI seminar, an audience member asked this question: “Why do robots interacting in unstructured ‘real world’ settings still move so slowly? What is the bottleneck? Is it computation power, hardware, algorithms?”. The speaker’s answer was very interesting. He said that, if a researcher could show a method of robot towel-folding that folded 5 towels in 20 minutes, then that’s good enough for them to graduate and so they’re done. For the next student that comes along, reducing that 20 minutes to something more human-like is frankly not very interesting, so the student would move on to another problem. The assumption seemed to be that the slowness problem would “solve itself” some unknown but definite time in the future.
George Lederman said:
1. I don’t think these guys (Kurzweil, Joy, Arkin) aim for a balanced presentation. The reason they took the time to write the articles is that they believed the current discussion was somehow wrong or missing some viewpoint. They believed that by writing their articles, they enriched the discussion.
I disagree with the question’s premise (that they used argument from authority)–Joy’s citation of the Unabomber is about as far away from authority as you can get. If anything, you could call his technique “argument from the despised.”
I think if you want a more balanced approach within a single article you could opt for pure logic, which is what Bringsjord tries to do in his analysis of the Unabomber’s argument. I don’t think that Bringsjord was successful, though. For a properly balanced approach I think that Jessica has the right idea: you need to read a variety of articles.
Lastly, when you (Illah) assigned us to read Kurzweil, I think you were using “argument from authority” in an almost ironic way. His viewpoint is so absurd and counter-intuitive that you had us read a supposed expert so that we would not immediately dismiss his arguments.
2. It seems that McDermott’s main criticism is of people overstating their work (whether in the title, in the use of natural language, or in believing that a program will be easy to implement before it has been written).
It is not exactly clear to me why this is such a big problem. If people didn’t slightly embellish their work, it would be difficult to get other people interested or to win more funding. And ultimately, I would say that people have achieved incredible levels of “understanding” through programming. When I Google something, the results are almost always what I am looking for. Although Google doesn’t “understand” me, the end result is as if it did.
Does this problem exist in robotics today? I do not feel I am properly equipped to answer this question. But I will describe the current state of my discipline, structural health monitoring. People often use new machine learning techniques in an attempt to classify ever more nuanced damage. Sometimes they embellish their work (saying they can accurately predict the remaining useful life of the structure). In general, I find that the field is very exciting and that many practitioners often understate the power of their techniques compared with the current state of the art.
mklingen said:
1. I’ve found that many of these “visionaries” present others’ work as though it were fact, and do so in a rapid-fire fashion, maybe citing only one line at a time. They do so for the purposes of speculation, and their presentation method ends up obscuring what the works they cite actually say. The problem is that there is only one voice (the author’s) in the mix. It would be nice to hear some of the voices of the works Kurzweil and others cite, even if those voices are opposed to Kurzweil’s “vision”: direct quotes, debates, survey literature, etc.
—
2. First, let me say that reading this paper was very refreshing. It put into words a lot of things which make me uncomfortable about robotics research but which I was not able to articulate.
I assume that what you mean by the “inflationary spiral” is the loose use of English to produce “wishful thinking” in the field, which leads to further elaborations using the same terminology, and further wishful thinking. The other compounding factor he mentions is the tendency of researchers to write papers which detail only a “preliminary” implementation of their ideas as though the full implementation had already been made.
Both of these things are absolutely occurring in robotics right now, and are bound to hurt our reputation someday. Take for instance the name of the robot I work on: “HERB”, which stands for “Home Exploring Robotic Butler.” HERB explores nothing (its base is teleoperated), it’s not in a home but in a lab, and it certainly hasn’t done anything in the way of butlering. In fact, HERB spends 99% of its time sitting around, powered down, without any sensors running, its arms hanging limp. Then, whenever a demo rolls around, frantic, panicked students hack together something for the specific demo situation (often hardcoding specific movements), show it to the press, and then throw away the machinery they hacked together for the demo. The press then writes a fancy article about how HERB “cooked a meal” or whatever.
If they had just called it “Mobile Manipulation Research Platform 1” or something, it would have been much more honest.
A subset of robotics where this problem is particularly revealing is in Computer Vision. The vision literature is plagued by misleading terms like “recognizer,” “detector,” “deep learning,” (or just “learning”), etc. Results come in the form of misleading boxes or outlines in an image with a convenient label (like “Face”) over it. Rarely is it ever discussed what it means to “perceive” something, or whether putting a box around something with an arbitrary label constitutes “understanding.” The authors just call it that, and the terms become confused. Soon, you’ve got decades of people calling arbitrary textual labels “attributes”, and bounding boxes, “detections” until you can’t even get your head around what these terms are supposed to mean.
Then, when a roboticist tries to use a state-of-the-art computer vision algorithm, he’s taken aback at how useless it is, and he goes back to the computer vision expert, who shrugs his shoulders, and wonders why the roboticist is even looking for the kind of “understanding” that his algorithm can’t deliver.
I found this line from the reading hilariously relevant:
“Most AI researchers react with amusement to proposals to explain vision in terms of stored images, reducing the physical eye to the mind’s eye.”
… But this is exactly what computer vision is about now! It’s all about “big data”: arbitrarily linking images to other images in databases (or their statistical properties) and claiming that this is vision and understanding.
These sorts of words get picked up by the press, and they are invariably misunderstood and hyped. Even the researchers begin to believe the hype, to the point where they begin to use their hijacked terms (“learning”, “understanding”, “perceiving”, etc.) to describe the real philosophical thing-in-itself. The only people who really get it are the laymen who come into our labs and laugh about how slow and useless our robots really are!
Anonymous said:
Latour2
1 In some of the robot visionaries’ articles (such as by Bill Joy, Ron Arkin and Ray Kurzweil) there appears to be an appeal towards argument from authority in isolated texts, just as described by Latour. Yet these are opinion-setting pieces for systems that have not yet been built. Is there an alternative for a more rhetorically balanced presentation of these ideas so that the readers have a fairer chance at correctly interpreting the opinions and concepts expressed?
I’m not sure there is a more rhetorically balanced way to do things. If we’re going to write any sort of opinions at all, then inevitably, there’s going to be rhetoric involved. Otherwise, the piece would just be a list of facts and conjectures, which would hardly make for interesting reading, even if it would be free of the biases and manipulations of rhetoric. Even if one considered that ideal (which is far from a given), I don’t think it’s particularly realistic to expect people to present their opinions in that manner, merely stating them rather than actually making an effort to persuade the reader that they’re correct.
And, if there’s going to be rhetoric and attempts to persuade the reader, I don’t think argument from authority is necessarily always a bad thing. It certainly *can* be used in bad ways, in which the reader is given the impression that the source has far more authority than they actually do (or more relevant authority), but in general, I think arguments *should* carry more weight when supported by people with expertise on the subject. If not, then what meaning does the expertise have in the first place?
I think there are two big dangers of argument from authority that we do need to watch out for, though. The first is presenting the authority’s belief as a certification of fact. The beliefs of someone who’s spent their life becoming an expert in a subject may carry more weight than those of a random person with an average level of expertise, but that doesn’t mean their beliefs are correct (unless they’ve been proven as fact, in which case the authority shouldn’t be necessary).
The other concern is presenting a person as having far more authority than they really do. This is very commonly seen in news articles, where people who are well-known in a field or just generally regarded as smart will have their opinion treated as sacred even when it’s not in their area of expertise. An example of this is when articles quote Stephen Hawking or Albert Einstein on arbitrary matters as if anything they say must be true because they are simply too smart to say something incorrect. An exaggerated joke example of this is at http://xkcd.com/799/.
McDermott
The inflationary spiral that McDermott describes for AI in 1981—do you think this phenomenon exists in robotics today? Describe its instantiation in our field if you believe it exists. If you believe we do not suffer from this problem, describe why you think we have managed to avoid it.
I think I pretty much fully agree with Jessica’s response here. On the one hand, I haven’t seen any robotics publications that do the “we only implemented a preliminary version, but we’re confident the better version we have in mind will work” thing that McDermott talks about. I’ve certainly seen papers that seemed to present best-case-scenario results, but there are still just about always results that at least involve an implementation of the exact algorithm described. They might pretend that what they’ve implemented solves more than it does, but they won’t pretend to have solved a problem with an algorithm they haven’t even implemented.
On the other hand, I do think the sort of dead zone of research that would be useful for advancing a field but doesn’t feel novel enough to be a thesis can exist in robotics. There are certainly areas where incremental improvements would be lovely but not thesis-worthy (unless done in a groundbreaking way). That said, just because something’s not thesis-worthy doesn’t mean no one’s doing it. Sure, PhD students in their fourth or fifth year often aren’t working on anything that’s not part of their thesis, but it’s seemed pretty common to me for masters students, undergraduate students, or PhD students who are still going through their qualifiers and not working on their thesis yet to work on smaller problems that aren’t considered thesis-worthy. Last semester and summer, a large group of students from my lab spent a huge amount of time improving our small-size robot soccer team. Not all of the students working on the problem are going to be working on the team for their thesis. It was just a small project along the way.
I don’t know about other schools, but from what I’ve seen, plenty of professors at CMU are happy to have their students spend some time, especially early in their program, working on smaller, non-thesis-worthy research. If making a robot fold a towel faster isn’t a thesis, someone still might do it as a smaller project in their first few years as a PhD student. If it’s too hard to do in that time, then there’s a good chance the problem’s challenging enough that someone will find a way to properly pitch it as a thesis.
Also, from what I’ve seen, robotics theses are not entirely dependent on results. Making a robot do something no robot has ever done before certainly helps, but I’ve definitely been to thesis talks where the contribution wasn’t that they made a robot do something new, but that they made it do something using a new technique that had potential advantages over existing techniques. For example, I recently went to the thesis defense of a student who was working on using dynamic programming to make a robot walk. He hadn’t made his robot do anything that Boston Dynamics hasn’t already released videos of a robot doing. The contribution was that he’d done it using dynamic programming, which he argued had potential advantages over existing techniques if further developed.
Max said:
Accidentally posted anonymously. This was Max.
Seun Aremu said:
1. Yes, I believe there is an alternative: theoretical systems of the future could be presented as technology derived from the modern day, with the extrapolation explained and founded on sound theory and modern physics. The problem with current descriptions of futuristic technology is that a majority of them lack a tangible grounding in modern physics, which gives the reader a false impression of the growth of technology. I believe this lack of sound, tangible scientific theory is, following Latour’s explanation, why it is very difficult to argue about proposed futuristic technology; a majority of the time, there is no credible scientific reference with which to defend either side of the argument or form a sound opinion about the presented concept.
2. I cannot speak for robotics as a whole, but in control theory I have seen many examples of the problem McDermott describes. Often in published control theory journal articles, not many problems are actually being solved; instead there are various displays of different forms of the same concept, with no intention of solving a particular problem, only the incomplete introduction of a “half” solution. McDermott mentions work that does not necessarily stimulate the mind but nonetheless gives the field a sense of progressive hope; I believe this same ideology persists in control theory when I see papers published with only partially simulated results as the foundation for their derived concepts.
joelsimon6 said:
In some of the robot visionaries’ articles (such as by Bill Joy, Ron Arkin and Ray Kurzweil) there appears to be an appeal towards argument from authority in isolated texts, just as described by Latour. Yet these are opinion-setting pieces for systems that have not yet been built. Is there an alternative for a more rhetorically balanced presentation of these ideas so that the readers have a fairer chance at correctly interpreting the opinions and concepts expressed?
After reading this I feel prompted to rethink what the relationship, if any, ought to be between the creators of ideas in society and those who present those ideas to the public. The most apparent solution for the prompt question is for the public to gain their information from a debate between those in the field. This is starkly different from authors like Joy or Kurzweil applying their rhetoric directly to the public. When authors do so, their primary purpose is to make their opinion heard, not to provide a purely balanced one. The counter-argument is that it ought to be the reader’s responsibility to read varying opinions, but I think the Joy article is proof that sometimes a piece of writing can be so loud it washes out counter voices. Furthermore, there is still subjectivity in any debate moderation, and every input of news is mediated to some degree.
The inflationary spiral that McDermott describes for AI in 1981—do you think this phenomenon exists in robotics today? Describe its instantiation in our field if you believe it exists. If you believe we do not suffer from this problem, describe why you think we have managed to avoid it.
So I don’t have any experience reading robotics research papers; that said, from the demonstrations I have seen and the papers I have read, it certainly seems that an inflationary spiral at least partially exists. Robots are nearly always advertised in an exaggerated way, far above their actual functionality. From a psychological point of view this seems straightforward: those working on a robotics project want to think of themselves as doing exciting research, so the work naturally seems more exciting to them. Part of what draws us to projects is the allure of their future potential, which often comes across as, and is mistaken for, the current state of progress. Media reporting seems to compound this, causing other researchers to feel the need to keep up.
Talha Rehmani said:
Latour
I would like to focus on the example of Bill Joy. I think most of his article consisted merely of theories or guesses about the future. Nor did he support his arguments and conjectures about robot futures with a good number of references. So we really cannot put them in the category of “scientific” facts.
To keep the balance, I think it is very important that the author present both sides of the picture to his audience. To me, it is okay for an author to go into more detail about his area of interest or his side of the story as long as he properly backs his ideas up with good references, but it is equally important that he raise the concerns of other people who do not agree with him or have a different point of view on the subject. This matters because most of the audience is non-technical or “isolated”, so ethically it is really important to keep a balance in our scientific writings. It would help the isolated audience to effectively form an opinion on any matter, and the author will also earn a lot of respect in the eyes of his readers. This approach would also help policy makers or funders make sound decisions about a scientific project or research.
McDermott
Like my friends here, I cannot speak for the whole of robotics, but I agree that this problem exists and it is common to see people claiming and publishing things which are not properly resolved. I am currently working on a control system for a chemical vapor deposition system for growing graphene. It is a very complex task where you have to feed the right amounts of gases at certain temperatures and pressures to get high-quality graphene. I have read many papers from various people who claimed to have completely resolved certain control issues, but when I actually tried to implement their coding techniques or algorithms, I found that most of them were probably never implemented. It took me a long time to fix those bugs and build on them. I would agree that source code could help in cases where somebody is trying to build a version 2.0, though this could also raise issues like plagiarism. In short, I think the inflationary spiral still exists, and it could be minimized if scientists showed more responsibility towards their research and publications.
Nico Zevallos said:
1) Kurzweil and other speculative writers end up relying much more on the imagination than other sources. They can weave this fantastic world and we find ourselves filling in the details ourselves. I think perhaps it is because unlike other fields, speculative science has an established place in our minds as a predictive fiction. Nobody ever asked Asimov for sources, yet people still see his writing as forward thinking and important.
I think the way around it is to disrupt the fantasy of the one-man lab. I think many people operate under the illusion that the world of cutting-edge robotics is run by a few Teslas and Edisons, and although there are a few people who really are a force unto themselves, much of the grunt work is put in by large international labs and companies working incrementally. They really should have more of a say in these matters, although I can’t see that happening until the very tantalizing and very American ideal of the rugged individualist tinkerer is dispelled.
2) We absolutely suffer from this problem in AI (I’m looking at you, machine learning), but robotics has the wonderful boon of having to look like it’s working. Those crippling bugs in V1.0 will probably look like crippling bugs on an actual machine, so our grounding in engineering requires at least a small amount of utility. Of course, as everybody in robotics knows, a massive amount of work is put into embellishing those kinds of demos. The funny thing is that because these robots are expensive, many people work on them for many years, so there is a lot of pressure to make something that works reliably. That means that something that was a one-time trick shot will either have been improved or scrapped. Most people’s thesis work is eventually made robust if it will be useful. However, that work will never be published scientifically as a solved problem. So these robot bodies ground us, but they also have the unfortunate side effect of capturing the imagination of anyone watching something manipulate the real world. I think within the field it is not a big deal, and there seems to be an understanding of how stupid robots are. The issue is again with our self-representation outside, in the media and in the social consciousness.