For some of us, hearing the words Artificial Intelligence (AI) makes us glad that technology and machine learning (ML) are becoming sophisticated enough to make our lives simpler (what would we do if Netflix shut down?). For others, these super-human systems pose great risks to our careers and privacy as they continue to blur the line between fact and fiction. Either way, our interactions with AI are becoming more frequent, and amid the hype it can be difficult to understand how AI is actually affecting our daily lives.
Last week we attended Deloitte’s Ethics of Artificial Intelligence (AI) event, part of their #DisruptorsInTech series. This interactive session was hosted by Peter Williams, founder of Deloitte Digital and Chief Edge Officer at Deloitte Australia’s Centre for the Edge, with a panel of Australia’s leading AI experts: Kendra Vant, EGM Data at Xero; Jade Haar, Head of Ethics at NAB; Catherine Lopes, Head of Enterprise Data and Analytics at AGL; Dr Robbie Fordyce, Lecturer in Communications and Media Studies (Big Data/Quantitative Analysis and Research Methods) at Monash University; and Chetan Ganjihal, AI Architect at Deloitte.
How AI is impacting our daily life
The more data these companies have, the better your consumer experience (music to your ears).
Using anonymised location data from our smartphones, Google Maps (Maps) feeds vast amounts of data into its algorithms, meaning Maps can reduce your commute by suggesting the fastest routes to and from work.
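At its core, a route-suggestion feature like this rests on shortest-path algorithms over a road network weighted by travel time. As an illustration only (the road network and travel times below are invented, and Google’s production system is vastly more sophisticated, folding in live traffic predictions), Dijkstra’s algorithm finds the minimum-time route:

```python
import heapq

def fastest_route(graph, start, end):
    """Dijkstra's algorithm: find the minimum-travel-time path.

    graph maps each node to a list of (neighbour, minutes) edges.
    Returns (total minutes, path) or None if no route exists.
    """
    queue = [(0, start, [start])]  # (elapsed minutes, node, path so far)
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == end:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (minutes + cost, neighbour, path + [neighbour]))
    return None

# Toy road network: travel times in minutes (purely illustrative)
roads = {
    "home": [("highway", 10), ("backstreets", 7)],
    "highway": [("work", 5)],
    "backstreets": [("work", 12)],
}
print(fastest_route(roads, "home", "work"))  # → (15, ['home', 'highway', 'work'])
```

Even though the backstreets leave home sooner (7 minutes versus 10), the algorithm correctly picks the highway route because the total journey is shorter, which is exactly the kind of non-obvious suggestion Maps surfaces.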
How do companies like Uber determine the price of your ride? How do they minimise the wait time once you hail a car? How do these services optimally match you with other passengers to minimise detours? The answer to these questions is ML.
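Uber’s real pricing and dispatch models are proprietary and driven by learned demand forecasts, but the two underlying ideas, matching you to a nearby driver and raising prices when demand outstrips supply, can be sketched in a few lines. Everything below (function names, coordinates, fares) is invented for illustration:

```python
import math

def nearest_driver(rider, drivers):
    """Match a rider to the closest available driver.

    Uses straight-line distance for simplicity; real dispatch systems
    use road networks and predicted pickup ETAs instead.
    """
    return min(drivers, key=lambda name: math.dist(rider, drivers[name]))

def surge_price(base_fare, requests, available_drivers):
    """Toy surge multiplier: fares rise when ride requests exceed supply."""
    multiplier = max(1.0, requests / max(available_drivers, 1))
    return round(base_fare * multiplier, 2)

# Invented data: driver positions as (x, y) coordinates
drivers = {"driver_a": (0.0, 0.0), "driver_b": (2.0, 3.0)}
print(nearest_driver((1.9, 2.5), drivers))                    # → driver_b
print(surge_price(10.0, requests=30, available_drivers=20))   # → 15.0
```

In production, the “requests” and “available drivers” inputs would themselves be ML predictions of demand a few minutes ahead, which is what lets the system pre-position drivers and shorten your wait.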
According to research cited by the Massachusetts Institute of Technology (MIT), as many as 10 million self-driving cars were predicted to hit public roads by 2020, and when they do, they’ll have difficult decisions to make. Hence the urgency behind building decision-making systems that can handle the classic ‘trolley problem’, in which a person, or an AI/ML system, is forced to decide whether to sacrifice the lives of several people or the life of one.
US company HireVue has developed interview technology that it claims enables companies to interview more candidates at the initial stage, rather than scanning through hundreds of CVs, while providing a more reliable indicator of future employee performance, free of human bias. Yet how do they remove the bias of the people programming such technology, and what impact does it have on unskilled workers?
The Dark Side
As companies continue to invest millions into designing next-gen tools and products powered by AI, we should not assume that these companies will automatically step up to their ethical and legal responsibilities when such technologies fail. Public trust in AI demands fairness, transparency and accountability more than ever following the Cambridge Analytica scandal. People are now returning Google Home and Alexa devices, deleting Facebook accounts, turning off location and voice settings on their smartphones, creating multiple email addresses and attempting to completely erase their digital footprint. Yet even AI experts can’t agree on a single definition of fairness. This leads to the question: who are the affected groups, and what metrics should be used to evaluate the impacts of AI, for example the impact of bias within its algorithms?
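The disagreement over fairness is not just philosophical; it shows up in the numbers. Two widely discussed criteria are demographic parity (each group receives positive decisions at the same rate) and equal opportunity (each group’s genuinely qualified members are approved at the same rate), and the same system can satisfy one while violating the other. A small sketch on entirely invented hiring data:

```python
def demographic_parity(predictions, groups):
    """Rate of positive decisions per group; 'fair' if rates match."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def equal_opportunity(predictions, labels, groups):
    """Share of truly qualified people approved, per group (true positive rate)."""
    rates = {}
    for g in set(groups):
        rows = [(p, y) for p, y, grp in zip(predictions, labels, groups) if grp == g]
        qualified = [p for p, y in rows if y == 1]
        rates[g] = sum(qualified) / len(qualified)
    return rates

# Invented data: 1 = recommended (preds) / genuinely qualified (labels)
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity(preds, groups))         # → {'A': 0.5, 'B': 0.25}
print(equal_opportunity(preds, labels, groups))  # → {'A': 0.5, 'B': 0.5}
```

Here the system approves group A twice as often as group B (failing demographic parity), yet approves qualified candidates from both groups at identical rates (satisfying equal opportunity). Which metric you choose determines whether the system counts as biased, which is precisely why experts can’t settle on a single definition.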
Technology is agnostic and will be used in both good and bad ways. However, as technology leads and public policy lags, we all share a responsibility, individually and collectively, at home, at work and everywhere in between, for who our data is shared with, what is shared, how it is used and how it shapes the future.
Does AI excite or scare you? Let us know in the comments, and don’t forget to like, share and tell your friends. For more info on AI at work, check out our insights on How Machine Learning and AI are impacting the way we Work.