There’s plenty of debate over whether AI is real. The very definition of Artificial Intelligence keeps changing — show a computer scientist from 50 years ago a modern algorithm’s ability to drive a car, identify images, compose text, or diagnose diseases, and they’d immediately conclude that AI was here.
[Photo: a shot from the 2017 main stage.]
But AI is brittle. It makes mistakes. At best, it complements human intelligence. It's not artificial; it's different. Is it still a cause for concern? Or is it little more than augmentation, never a real threat to human work and jobs?
Why we need to act
Do we need to plan for AI now? Or should we wait and see? Two arguments say yes: we need to act, strongly and immediately.
AI is a logical next step
The first is that AI is a logical progression of computer trends that have been happening for decades:
- Data centers gave us significant, centralized computing and storage power.
- Cloud computing made these data centers elastic, so you didn’t have to buy what you didn’t need.
- This meant sharing infrastructure, and “bursty” workloads, so anyone could analyze vast reams of information — which the modern, connected, mobile Internet was only too happy to provide.
- When you have that much data, you need algorithms to crunch through it.
- The best of those algorithms learn from the data itself, building ever-better versions of their own models. We call this Machine Learning.
And that’s the state of modern AI and data science.
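For readers who want a concrete picture of that last step, here's a minimal sketch of what "learning from data" means. It's illustrative only (plain Python, a toy one-variable model, invented data; no real system works at this scale): the model repeatedly adjusts its own parameters to shrink its prediction error on the examples it's shown.

```python
# Toy illustration of machine learning: a model adjusts its own
# parameters (w, b) to reduce its prediction error on examples.
# Illustrative sketch only; real systems use ML libraries and far more data.

def train(examples, steps=1000, lr=0.01):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        # Average gradient of squared error across all examples
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        # Nudge the parameters downhill, reducing the error
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic data drawn from y = 2x + 1; the model recovers those values.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # close to 2 and 1
```

The point of the sketch: nobody told the program the rule "y = 2x + 1". It found the rule from the data, which is exactly the shift the bullet list above describes.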
Not not going to happen
The second is what I call the “not not going to happen” argument. I heard a great version of this during a panel I moderated at the 2019 APEX symposium in Ottawa. I asked Deloitte’s Shelby Austin whether she felt organizations needed to act on AI immediately, or whether they could wait. “We don’t know which of the companies deploying AI today will win,” she replied, “but we know the winner will use AI.”
This is a good Occam’s Razor for AI adoption. We don’t know for sure which AI strategy will win out, or exactly how it will be deployed successfully. But we do know — given its tremendous power to make sense of the torrent of data modern society generates — that AI is not not going to happen.
The automation that machine learning and data science enable is often a nonpartisan issue: it can deliver tailored, personalized services, but it can also cut costs and balance budgets. At the same time, everyone's concerned about the invasions of privacy AI might bring, whether that's analyzing public data to infer private facts, reinforcing prejudices in policing and justice, or myriad other concerns.
It’s also no surprise that AI consistently ranks high as a topic in our annual survey of technologies and domains. If you’re interested in the promises and challenges of AI and data science, you need to be at FWD50.
November 5—workshop day
On the first day of the conference, which is open to all ticket holders with a Conference + Workshop pass, there’s plenty lined up:
- We’ve got sessions on how AI will change the public sector, on living through the transition to the next, digital economy, on what next-generation citizen experience should look like, and on finding better government outcomes with data, analytics, and AI. These are taught by global experts from organizations like Policy Horizons, and even by the CTO of Estonia!
- There’s a 3-hour workshop on AI, sustainability, and ethics taught by Jerry Overton, who travels the globe running thoughtful, interactive workshops on ethics and AI that constantly earn him accolades.
We’ve sprinkled AI and data science content throughout both keynotes and breakouts, as well as Circlesquare, on the second day of the event:
- We kick things off with keynotes, including a talk from Sidewalk Labs’ Jacqueline Lu on whether data (and the algorithms that analyze it) can help us build better communities. Sidewalk’s roll-out has not been without controversy, so we’ve invited them to take the main stage and then have a discussion about the trade-offs of technology in a breakout session.
- If you want to think about the future, Science Fiction is a good place to start. So we’ve asked author Malka Older to talk about her imagining of future worlds. She’ll look at how to use those visions to focus on the right things today. (Fun note—this started with a tweet from Thom Kearney back in February; great to see it turn into something!)
- Georges Clement is using crowdsourcing, data science, and algorithms to track down New York’s worst slumlords. He’ll be explaining what he’s created, and how it’s changing the NYC rental landscape, in a breakout.
- If we’re going to rely on computers, data, and algorithms for our future, they’d better belong to us. Open source is only part of the conversation; so are the right to repair and the sovereignty of our data. We’re assembling organizations like OpenCorporates and OpenStreetMap to shine more light on this critical self-determination angle.
- The Circlesquare interactive discussion once again has an AI and Data Science corner, where we’ll talk about the impact of these technologies on climate, disasters, and emergencies; regulation and compliance; finance and spending; and records and information management.
On the final day of the conference, there’s still plenty of AI content to keep you busy, including:
- A team from Oracle, including San Francisco-based data science expert Sherry Tiao, is running a breakout on data management in the AI/machine learning era on the Industry Innovations stage.
- As if that weren’t enough, Ramy Nassar and Rebecca Kain are running a 3-hour workshop on design thinking and AI, with a particular focus on Canada’s world-leading AI guidelines. Entitled “Design Thinking for AI & Machine Learning: Methods & Best Practices for Building Citizen-Centered AI-enabled Services & Solutions,” it’s a custom-built workshop for those who want to dive deep into the ethical issues of building AI services.
If you’re interested in how AI and data science change government—and you really, really should be—then FWD50 is the place to be this November.