The MIT Technology Review hosted an artificial intelligence (AI) conference last week. I’ll mention only once the troublesome technology behind the video conferencing, the conference web site, and the agenda management. The issues were serious, which I hadn’t expected, but wending my way through them led to some fascinating conversations. This column covers the first and third days; the middle day was more research-oriented, and real-world effect is what concerns me.
The keynote was divided into two sections. The ever-present Andrew Ng spoke first. My favorite part of his presentation came when the question was posed, “How do people create an AI-first business?” Andrew’s straightforward response was, “Don’t do that.” While AI is important and will have a broad impact on business and society, he was the first of many speakers to remind the AI technologists that the problem to be solved comes first. The right tools must then be chosen, and while AI is becoming increasingly important, it is still just that: a tool.
Another point Andrew Ng made was that, while many in the industry now associate data lakes and massive data sets with AI, the association isn’t necessary. Some domains can get accurate results with much smaller data sets. Again, it’s important to understand and identify the business problem before attempting to solve it with computation.
Michelle Lee, VP, Machine Learning Solutions Lab, Amazon Web Services (AWS), delivered the second half of the keynote. She outlined seven takeaways from AWS’s AI ventures. Though they were fine, it felt like “déjà vu all over again,” in the immortal words of Yogi Berra. The steps aren’t new, and they’re not just for AI; they’re good practices for any software development project. Take step three, “Technical and domain experts collaborate.” Is that true? Of course it is. They had to collaborate even back in the mainframe, waterfall days. What she said, however, is still important for AI practitioners to hear, since so many are still coming out of academic research groups and big organizations, and people need to be taught that this matters.
The next session would have been fascinating had it not been marred by technical difficulties (about which I complained twice…). Alex Waller, co-founder and CTO of The Routing Company, addressed routing concerns, as his company’s name suggests. One of the company’s main messages is that it is trying to give public transit agencies serving sparse areas (think semi-rural) the ability to schedule transportation the way ride-sharing companies do. The only problem was that they used Houston as an example, and a dense metropolis doesn’t seem to fit that domain. I’ll try to contact Alex, because it looks like an intriguing opportunity.
On Thursday, the first major session focused on AI ethics and future legislation to handle the legal and social risks. Julia Reinhardt, Mozilla Foundation Fellow in Residence, kicked things off by discussing the disparity between internal company standards and the need for external ones. She presented a good list of questions for businesses to pose and answer when establishing internal corporate ethics standards, and she pointed out that the public wants a clearer understanding of how well companies are adhering to agreed ethical rules. Julia also stated that the EU’s initial proposals, which are currently being developed, would not outright prohibit facial recognition.
Saiph Savage, Director, Civic Innovation Lab, Universidad Nacional Autonoma de Mexico, described the “hidden AI workers” in detail. Even in the United States, people doing gig work to label images, transcribe audio, categorize content, and do other data-management tasks earn less than $2 per hour. Saiph’s work involves a plug-in for a few gig sites to see whether improved communication among workers can help with training and wages.
Abeba Birhane, Complex Software Lab, University College Dublin, gave the session’s final presentation. She echoed Andrew Ng’s argument, stating that AI appears to be a hammer in search of a nail. It’s just a tool; it shouldn’t be thrown at every problem just because it’s cool.
Her other major argument is extremely important: the people affected by AI systems are not seen as stakeholders by the systems’ current owners. That connects to Julia Reinhardt’s point about the importance of government regulations in enforcing that need. It was a good way to end the session. In other news, I’ll soon be writing a summary of “A Citizen’s Guide to Artificial Intelligence.” One of the book’s more important points, and one I address, is the recommendation for an FDA-style organization to oversee AI. Julia Reinhardt said the same thing during the Q&A portion of this session. I liked the idea, which came from several of the book’s authors, and it is something that needs to be taken seriously.
Three speakers took part in the live session. One vaguely mentioned corporate culture in his AI business. Another discussed self-driving cars, going over the same technical ground he had covered previously. When asked about legislation, he liked only rules enacted in anticipation of human drivers no longer being needed (such as how rear-view mirrors are placed). When it came to other legislation and liability, the response was simply that the government should leave them alone because AI practitioners are somehow special.
Julian Sanchez, John Deere’s Director of Emerging Technology, sparked curiosity and provided the meat of the discussion. The farm tractor company, that is. He gave an excellent presentation on how AI is a key component of newer systems, both in front of and within the cockpit of a large sprayer. For example, vision and analysis are helping differentiate crops from weeds, potentially resulting in a 90 percent reduction in herbicide usage by concentrating its application.
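The 90 percent figure follows directly from the geometry of targeted spraying: if vision lets the sprayer hit only the weeds, herbicide usage scales with the weed-covered fraction of the field rather than its whole area. A minimal sketch of that arithmetic, assuming an illustrative 10 percent weed coverage (my number for illustration, not one from the talk):

```python
# Toy arithmetic behind targeted spraying: usage scales with the
# weed-covered fraction of the field instead of the full field area.
# All numbers here are illustrative assumptions, not John Deere figures.

def blanket_spray_liters(field_hectares: float, rate_l_per_ha: float) -> float:
    """Herbicide used when the whole field is sprayed uniformly."""
    return field_hectares * rate_l_per_ha

def targeted_spray_liters(field_hectares: float, rate_l_per_ha: float,
                          weed_fraction: float) -> float:
    """Herbicide used when only the weed-covered area is sprayed."""
    return field_hectares * weed_fraction * rate_l_per_ha

blanket = blanket_spray_liters(100, 2.0)          # 200 L for a 100 ha field
targeted = targeted_spray_liters(100, 2.0, 0.10)  # 20 L if weeds cover 10%
reduction = 1 - targeted / blanket                # 0.90, i.e. a 90% cut
print(f"Reduction: {reduction:.0%}")
```

In other words, a 90 percent reduction corresponds to weeds occupying roughly a tenth of the sprayed area; the hard part is the computer vision that finds them, not the arithmetic.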
He also described an aspect of edge computing that many people in cities are unaware of: it isn’t confined to your smartphone. Since farms are usually out of reach of today’s broadband, heavy computing, such as running inference engines, must be done at the edge. Agriculture is a fascinating business, and one in which I’ve worked on the client side with robotics for greenhouse planting. It was interesting to hear about John Deere’s field plans.
I wasn’t thrilled with two of the speakers in the final session, but the third made it worthwhile. In one talk, the CEO of an AI company offered the standard short-term view of “augmented intelligence,” which is where the market is now, and used it to justify the claim that there won’t be further job losses as AI’s capabilities advance. Another offered valuable information about the loss of “middle jobs,” but couldn’t clarify how that loss could be prevented or reversed. Both speakers demonstrated the difficulty some professional and academic people have in comprehending the true effect on workers.
Veena Dubal, Professor of Law at UC Hastings College of Law, gave a critique of the gig economy. Her research comparing the taxi and ride-sharing industries was interesting, particularly her finding that the disparity between them isn’t due to technology. Taxis used a shared-call dispatch system similar to the one developed by Uber and Lyft. The difference lies in the business model: how costs have been shifted from the company to the employee/contractor, and how workers are managed to increase revenue. She also used a term I’d never heard before but immediately understood: algorithmic Taylorism.
Many of the speakers, and I among them, agree that we should not avoid developing AI. It’s extremely beneficial, and it will pervade society and industry. What’s needed is an appreciation that AI’s effect on society will be more than just technological. We will face major challenges in the coming decades if we do not figure out how to manage it for the greater good. The meaning of work may need to change, as will our approach to supporting social systems. It’s not good enough to let AI have its way and then respond to problems, as some speakers suggested. We need to look ahead and make plans for what we expect to happen. After all, artificial intelligence isn’t the only form of intelligence that exists.