AI and the future of humanity

In this captivating keynote lecture, renowned historian and philosopher Yuval Noah Harari presents his thoughts and predictions on ‘AI and the Future of Humanity’. His address explores a multitude of questions surrounding this topic, such as: how AI is poised to shape culture; the potential dangers to humanity as AI acquires a grasp on human intimacy; whether AI signals the end of human history; whether ordinary individuals will be able to build powerful AI tools independently; and the need for regulation of AI.

The event was arranged by the Frontiers Forum, an organization dedicated to forging connections among global communities from science, policy, and society in order to accelerate worldwide scientific ventures. It was held on April 29, 2023, in Montreux, Switzerland, and was produced and filmed with assistance from Impact.

Yuval Noah Harari, who delivered the keynote address, is recognized as the prolific author of bestselling books like ‘Sapiens: A Brief History of Humankind’ (2014), ‘Homo Deus: A Brief History of Tomorrow’ (2016), ’21 Lessons for the 21st Century’ (2018), and the ‘Sapiens: A Graphic History’ series (introduced in 2020, in collaboration with David Vandermeulen and Daniel Casanave).

Road trip to Brisbane madness

The long, twisted road to Brisbane had been calling my name for ages, a siren song drowned out by the white noise of floods and plagues. Yet, I finally succumbed to the madness, embarking on a nine-day odyssey of asphalt, sweat, and steel to reach the sun-scorched hellscape of the Queensland capital.

Narrandera

The first leg of the journey found me collapsed in the dusty arms of Narrandera, seeking refuge in the eerily vacant Star Lodge – a long-defunct hotel turned into a bizarre, booze-free sanctuary for weary travelers. The metamorphosis had bestowed upon the old haunt a new kitchen and a lounge room in place of the ‘front bar,’ the ghosts of drunken revelry replaced by the uneasy silence of the sober.

Bingara

My next stop was the surreal town of Bingara, where time seemed to have taken a wrong turn and stumbled into the twilight zone of art deco. Here, I discovered the Roxy Theatre, a relic of a bygone era, and the Royal Hotel, a madhouse of booze-fueled chaos. My route to this peculiar outpost was an ill-fated adventure in itself, a misjudgment that led me down a dirt track riddled with creek crossings and an overwhelming sense of foreboding. Surely, there must be a better way to reach the depths of Brisbane…

Brisbane

Three days I spent in the belly of the beast, Brisbane, a city teetering on the edge of modernist oblivion. My temporary lair was the Brisbane Manor House, a ramshackle den of travelers and vagabonds hidden beneath the veneer of a Queenslander. The days slipped by in a haze, topped off by a party on Friday night.

Tamworth

The party’s aftermath left me reeling as I embarked on the grueling trek to Tamworth, the heavens unleashing their wrath in the form of a torrential downpour. My arrival at the Tamworth Hotel was a sodden mess, forcing me to brave the raucous bar scene in search of sanctuary. A small blessing arrived in the form of a heated room, offering warmth and solace as I prepared for the next leg of my descent into the unknown agrarian heartland.

Forbes

The original plan was to reach West Wyalong, but my battered body screamed for mercy, leading me instead to the haven of Forbes. The skies threatened rain, but the storm never came, allowing me a moment’s reprieve in a cozy motel. A David Bowie documentary provided a fleeting glimpse of hope amidst the encroaching darkness.

Melbourne

The final stretch back to Melbourne was long and arduous, but the previous night’s rest fortified me for the task. In retrospect, four days would have been more reasonable to endure the hellish ride to Brisbane on two wheels; the three-day gauntlet was a cruel test of human endurance.

And so, my journey came to an end, a fever dream of miles and madness imprinted upon my soul. The road is a harsh mistress, but her allure remains ever potent.

Road trip to Brisbane from Melbourne (Itinerary)

With a steely resolve and a taste for the wild unknown, I’m tearing off on a 9-day, petrol and coffee-fueled escapade aboard my iron steed, cutting a blazing path from Melbourne to Brisbane via the agrarian inland route. As I pierce this sunburned land’s heart, I’ll chronicle my exploits and dish out the untamed truth through my blog.

1) Sunday, 23 April Narrandera

In the heart of the Riverina, Narrandera slumbers by the twisting Murrumbidgee River, a haven where history and nature’s bizarre allure tangle in a feverish dance. This obscure outpost, a realm of marsupial mystique, draws in wandering souls seeking communion with koalas and the tranquil embrace of riverside wanderings.

Travel: 420 km, 4.40 hours

2) Monday, 24 April Bingara

In the wild, far-flung reaches of the New England bush, Bingara rises like a half-forgotten memory of the gold lust that once possessed the land. This strange, time-warped village lures travellers into its nostalgic grip, inviting them on a hallucinatory journey down the winding Gwydir River and through the ghostly echoes of its heritage buildings.

Travel: 763 km, 8.40 hours

3) Tuesday, 25 April Brisbane

Gone are the days when Brisbane was considered hedonistic and boorish; the city has emerged as a thriving, cosmopolitan hub. With its (nervous) arts and cultural scene and stunning riverfront precincts, Brisbane may now even rival its southern counterparts (still, I liked the 1970s yobbo fantasy movies of the Melbourne blokes going “up north”).

Travel: 469 km, 5.16 hours

4) Wednesday, 26 April Brisbane

5) Thursday, 27 April Brisbane

6) Friday, 28 April Brisbane

7) Saturday, 29 April Tamworth

Renowned as Australia’s horrible music capital, Tamworth is a lively regional city steeped in music, agriculture, and picturesque landscapes.

Travel: 574 km, 6.4 hours

8) Sunday, 30 April West Wyalong

Once a thriving gold mining centre, West Wyalong now serves as an agricultural hub, drawing visitors with its crooked main street and heritage architecture. The town offers a true taste of rural Australia.

Travel: 583 km, 6.4 hours

9) Monday, 1 May Melbourne

Travel: 567 km, 6.1 hours

Introduction to AI and AI Safety (videos)

Artificial Intelligence (AI) has come a long way, from theoretical concepts to practical applications that shape our work and everyday lives. Crucial AI concepts start with the distinction between Narrow AI and Artificial General Intelligence (AGI).

AI safety refers to the research and practices to ensure artificial intelligence systems operate reliably, ethically, and without causing harm to humans or society. As AI systems become more advanced and integrated into various aspects of daily life, the importance of AI safety has become increasingly evident.

What is AI Safety?

AI Safety is the interdisciplinary study of ensuring that artificial intelligence (AI) systems are designed and deployed responsibly, minimising risks and maximising benefits for humanity. This field addresses concerns about AI’s potential unintended consequences, biases, and misuse by focusing on areas like robustness, interpretability, and value alignment. Robustness emphasises creating AI that performs reliably even in uncertain conditions. Interpretability involves understanding an AI’s decision-making process. Value alignment ensures AI systems align with human values and ethical principles. AI Safety aims to develop strategies and techniques that guarantee AI systems remain beneficial, trustworthy, and under human control.
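Robustness, the first of these areas, can be made concrete with a quick check: perturb a model’s inputs slightly and measure how often its decision changes. The sketch below is a minimal illustration with an invented toy classifier standing in for a real model; the function names and threshold are my own, not from any particular library.

```python
import random

def classify(x):
    # Toy stand-in for a real model: label 1 if the feature sum crosses a threshold.
    return 1 if sum(x) > 1.0 else 0

def robustness_score(inputs, noise=0.01, trials=100, seed=0):
    """Fraction of noisy copies whose label matches the clean prediction."""
    rng = random.Random(seed)
    stable = 0
    total = 0
    for x in inputs:
        base = classify(x)
        for _ in range(trials):
            noisy = [v + rng.uniform(-noise, noise) for v in x]
            stable += (classify(noisy) == base)
            total += 1
    return stable / total

# Points far from the decision boundary score 1.0; the last point sits
# right on the threshold, so its label flips under small perturbations.
points = [[0.2, 0.3], [0.9, 0.8], [0.5, 0.5]]
print(robustness_score(points))
```

A score well below 1.0 flags inputs where the system’s behaviour is fragile, which is exactly the kind of unreliability AI Safety research tries to surface before deployment.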

Source: No AI Robots Sign. Openclipart.org. Published 2023. Accessed April 2, 2023. https://openclipart.org/detail/340345/no-ai-robots-sign

In the context of AI, what does ‘under human control’ mean?

“Under human control” in the context of AI means that an artificial intelligence system’s decision-making, behaviour, and actions are guided, monitored, and supervised by humans. It ensures that AI systems operate within the bounds of human-defined objectives, ethical principles, and societal norms, preventing them from causing unintended harm or acting autonomously in undesirable ways. Human control includes oversight, intervention, and adjustable autonomy, enabling humans to influence, correct, or halt AI systems when necessary. Maintaining human control is vital for ensuring AI safety and promoting responsible development and deployment.

What does ‘autonomous’ mean in the context of AI?

In the context of AI, “autonomous” refers to the ability of an artificial intelligence system to perform tasks, make decisions, or take actions without direct human intervention or continuous supervision. Autonomous AI systems can perceive their environment, process information, learn from experiences, and adapt to changing circumstances to achieve specific goals. The degree of autonomy can vary, ranging from simple decision-making in narrow domains to complex, general-purpose problem-solving. As AI systems become more autonomous, concerns about safety, ethics, and alignment with human values increase, making the need for responsible development and deployment of AI more critical.

Who are the influential AI ethics organisations in the United States?

  1. Partnership on AI: Founded by major tech companies like Google, Amazon, and Microsoft, the Partnership on AI aims to ensure that AI benefits humanity by conducting research, promoting best practices, and providing a platform for open collaboration on AI-related topics. https://www.partnershiponai.org/
  2. AI Now Institute: The AI Now Institute, based at New York University, focuses on the social implications of AI, advocating for responsible AI practices and policies that address bias, fairness, accountability, and transparency. https://ainowinstitute.org/
  3. Centre for Human-Compatible AI (CHAI): Affiliated with the University of California, Berkeley, CHAI researches value alignment, AI safety, and the long-term societal impact of AI, aiming to develop AI systems that are provably beneficial to humanity. https://humancompatible.ai/
  4. Future of Life Institute (FLI): FLI is a nonprofit organisation dedicated to mitigating global catastrophic risks, including those posed by advanced AI. They support research and initiatives to ensure AI development aligns with human values and is safe for society. https://futureoflife.org/

Who are the influential AI ethics organisations in Australia?

  1. Australian Human Rights Commission (AHRC): Although not exclusively focused on AI, the AHRC addresses ethical concerns related to AI and emerging technologies. They work on promoting human rights and preventing discrimination in the development and deployment of AI systems. https://humanrights.gov.au/
  2. Data61: A part of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Data61 is involved in AI research and development, including AI safety, ethics, and policy. Data61 works to create a responsible AI ecosystem in Australia. https://www.data61.csiro.au/
  3. Responsible AI Network: Coordinated by the National AI Centre at CSIRO, Australia’s national science agency, the Responsible AI Network is a world-first cross-ecosystem program that supports Australian companies in using and creating AI ethically and safely. It was launched by the Minister for Industry and Science, the Hon Ed Husic MP. https://www.csiro.au/naic
  4. Gradient Institute: An independent research institute, the Gradient Institute focuses on developing the theory and practice of ethical AI systems, ensuring they are designed and deployed for the benefit of all people. https://gradientinstitute.org/
  5. eSafety Commissioner: The Office of the eSafety Commissioner is an Australian government agency dedicated to promoting online safety. They address issues related to digital technology, including AI, and work to create a safer online environment for all Australians.  https://www.esafety.gov.au/

What is an example of AI Safety gone wrong?

One example of AI safety gone wrong is Microsoft’s AI chatbot, Tay. Launched in March 2016, Tay was an AI-powered chatbot designed to engage in conversations with users on Twitter and learn from their interactions. The objective was for Tay to improve its conversational abilities by mimicking human-like responses.

However, within hours of its launch, Tay started posting offensive, racist, and inappropriate messages. This was due to the chatbot learning from its interactions with users who intentionally fed it harmful content. Microsoft had not implemented sufficient safety measures, such as content filtering or stricter learning mechanisms, to prevent Tay from adopting and reproducing such behaviour.

As a result, Microsoft had to take Tay offline within 24 hours of its launch. The Tay incident highlights the importance of AI safety measures, including robustness against adversarial inputs and value alignment with human ethics, to prevent AI systems from causing unintended harm or behaving undesirably.
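The content filtering Microsoft omitted can be as simple (and as brittle) as a keyword blocklist. The sketch below is a minimal illustration, not Microsoft’s actual approach; the blocklist terms are invented placeholders.

```python
BLOCKLIST = {"badword", "slur"}  # invented placeholders; a real filter needs far more

def is_safe(message):
    """Naive keyword filter: reject any message containing a blocked term."""
    tokens = (w.strip(".,!?") for w in message.lower().split())
    return not any(t in BLOCKLIST for t in tokens)
```

A filter like this is trivially evaded with misspellings or spacing tricks, which is why the Tay incident pointed to the need for stricter learning mechanisms as well, not keyword filtering alone.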

Hern, A. (2016, March 24). Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter. The Guardian. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

What is an example of AI safety gone wrong in the UK?

An example of AI ethics concerns in the UK involves using an algorithm for determining A-level exam grades in 2020. Due to the COVID-19 pandemic, UK students could not take their A-level exams, which play a critical role in university admissions. In response, Ofqual, the UK’s Office of Qualifications and Examinations Regulation, developed an algorithm to predict students’ grades based on factors like their prior academic performance and the historical performance of their schools.

However, the algorithm was widely criticised for being unfair and biased. Students from disadvantaged backgrounds and lower-performing schools were disproportionately affected, as the algorithm tended to downgrade their predicted grades. This led to a public outcry, with students and families demanding a fairer approach to grading.

In response to the backlash, the UK government eventually scrapped the algorithm-based grading system and relied on teacher-assessed grades instead. This incident highlights the importance of transparency, fairness, and accountability when developing and deploying AI systems, particularly when they significantly impact people’s lives.

Busby, E., & Crouch, H. (2020, August 17). A-level results: Government in humiliating U-turn as it finally ditches controversial algorithm for teacher-assessed grades. The Independent. https://www.independent.co.uk/news/education/education-news/a-level-results-algorithm-teacher-assessed-grades-gavin-williamson-ofqual-a9674611.html

What is ‘explainable AI’?

Explainable AI (XAI) refers to a subfield of artificial intelligence that focuses on developing AI systems and models that can provide human-understandable explanations for their decisions, predictions, or actions. The primary goal of XAI is to make AI more transparent, accountable, and trustworthy, addressing the so-called “black-box” problem where complex AI models, such as deep neural networks, can be difficult for humans to interpret and understand.

Explainable AI involves various techniques and approaches that help users comprehend why an AI system arrived at a specific output. Some standard methods include:

  1. Feature importance: Identifying and ranking the most critical input features contributing to the AI system’s decision.
  2. Local explanations: Providing explanations for specific instances or decisions, often by approximating the complex model with a simpler, more interpretable model.
  3. Global explanations: Offering a broader understanding of the AI system’s behaviour and decision-making process over various inputs.
  4. Rule extraction: Deriving human-readable rules or decision trees from the AI model to help explain its decisions.

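Feature importance (method 1 above) can be sketched with permutation importance: shuffle one input feature across the dataset and measure how much the model’s accuracy drops. Everything below, including the toy model and data, is invented for illustration under the assumption of a simple classification setting.

```python
import random

def permutation_importance(model, X, y, feature, trials=30, seed=0):
    """Average drop in accuracy when one feature's column is shuffled across rows."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        column = [row[feature] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, column)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Toy model that only looks at feature 0, so feature 1 is irrelevant.
model = lambda row: row[0] > 0.5
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [True, False, True, False]
print(permutation_importance(model, X, y, feature=0))  # positive: the model relies on feature 0
print(permutation_importance(model, X, y, feature=1))  # 0.0: the model ignores feature 1
```

Shuffling a feature the model depends on destroys its signal and accuracy falls; shuffling an irrelevant one changes nothing. Ranking features by this drop gives the kind of human-understandable explanation XAI aims for.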
Explainable AI is critical in industries and applications where the consequences of AI decisions can have significant impacts, such as finance, healthcare, law, and self-driving vehicles. By providing better insight into the AI system’s functioning, XAI can help to build trust, facilitate collaboration between humans and AI, and ensure that AI-driven decisions are ethically and legally sound.

XAI World Conference. (n.d.). XAI World Conference. Retrieved April 2, 2023, from https://xaiworldconference.com/

Black box AI refers to machine learning models that arrive at decisions or conclusions without explaining how they reached them. These models are often too complex for even experts to understand, making it challenging to identify and correct errors or biases. This lack of transparency is particularly problematic in high-stakes decision-making contexts such as healthcare, finance, and criminal justice. As a result, there is growing interest in developing explainable AI models that can provide clear, interpretable explanations for their decisions. Such models are designed to be more transparent and accountable, allowing users to understand how a model arrived at its conclusions and to identify potential biases or errors. The development of explainable AI is essential to ensure that AI is used ethically and responsibly and benefits society.

What are some short courses about AI and ethics?

  1. Governance, Ethics and Regulation of AI – UTS Open – This digital ethics and governance short course from UTS Open explores the use of AI in business and community contexts. It examines the laws, standards, and regulatory initiatives designed to protect users from digital hazards. https://open.uts.edu.au/uts-open/study-area/law/professional-skills/governance-ethics-and-regulation-of-ai/
  2. The Ethics of Artificial Intelligence – Melbourne MicroCert – This Melbourne MicroCert is ideal for leaders and digital professionals who want a better understanding of the opportunities and risks of AI and how these can impact organisations. https://study.unimelb.edu.au/find/microcredentials/introduction-to-the-ethics-of-artificial-intelligence/
  3. Australia’s AI Ethics Principles – Department of Industry – This voluntary framework outlines Australia’s AI ethics principles and guides organisations developing and implementing AI systems. It is not an online course but a publication; still, it is worth studying. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
  4. Ethics of Artificial Intelligence | Coursera – This course teaches you to identify the ethical and social impacts and implications of AI, critically analyse current policies for AI, and use ethical and socially responsible principles in your professional life. [Link: https://www.coursera.org/learn/ai-ethics]
  5. Ethics of AI: Safeguarding Humanity | Professional Education – Led by MIT thought leaders, this course will deepen your understanding of AI as you examine machine bias and other ethical risks and assess your individual and corporate responsibilities. [Link: https://professional.mit.edu/course-catalog/ethics-ai-safeguarding-humanity]
  6. Artificial Intelligence Ethics in Action | Coursera – This course by LearnQuest is part of the Ethics in the Age of AI Specialisation. It focuses on analysing ethical AI across various topics and situations. https://www.coursera.org/learn/ai-ethics-analysis

AI writing statement: This blog post has been written with the assistance of OpenAI ChatGPT (GPT-4) and https://www.perplexity.ai/. This includes drafting, text arrangement (dot points, steps etc.), search, and idea generation. Approximately 50% of the final product of this blog post reflects these contributions.

Hike to Lake Tali Karng

Lake Tali Karng, nestled deep within the alpine region of Victoria, Australia, is a serene and very deep lake in a rugged bush landscape. Our adventure began near Licola, a small town approximately 250 kilometres east of Melbourne. The town itself is a base for hikers and hunter and fisher types, with a general store, a caravan park, and a campsite (and not much else). After a long drive from Melbourne, my hiking buddies and I set off on the 20-kilometre Wellington Plains track, following the path through boggy open plains.

Wellington Plains

The first day of our hike was a moderate trek, perfect for warming up our legs for the more challenging terrain ahead. As we made our way through the expansive plains, we were greeted by weird Australian Jurassic wildflowers, their vibrant colours contrasting beautifully against the hues of the grasslands. The open skies and vast landscapes provided us with a sense of freedom from a world with too many digital screens. We set up camp and ditched the heavy backpacks, ready for the descent to the lake in the morning.

Camping at the top of the hill at Nyimba Campground

On the second day, the terrain grew steeper and the vegetation denser with each passing kilometre. Our reward for the strenuous ascent was the panoramic vista of the surrounding mountains and valleys, periodically enveloped in clouds. The trail led us through eucalyptus forests, accompanied by a symphony of birdsong. As we navigated the rugged terrain, the first glimpse of Lake Tali Karng came into view.

Lake Tali Karng

The ancient lake is a sacred site for the Indigenous Gunaikurnai people; its pristine waters and rugged surrounds feel mystical, and we were filled with reverence and awe.

We climbed the steep track back to the Nyimba Campground, and spent the night there before the trek over Wellington Plains and back to Licola in the morning.


Release of GPT-4

GPT-4, the fourth iteration of OpenAI’s generative pre-trained transformer series, is a multimodal language model. It was officially launched on March 14, 2023, and made accessible to ChatGPT Plus subscribers. Prior to its public release, Microsoft had already integrated GPT-4 into certain versions of Bing. The model was pre-trained on a combination of public and licensed data, then fine-tuned through reinforcement learning guided by human feedback (this would be a horrible job).

OpenAI has refrained from disclosing the model size due to the competitive nature of the AI industry and potential safety concerns related to large-scale models. There were speculations that GPT-4’s parameters would increase dramatically from GPT-3’s 175 billion to 100 trillion, a claim that OpenAI CEO Sam Altman dismissed as false.

GPT-4 outperforms GPT-3.5 in reliability and in the ability to process complex instructions. It can work with up to 25,000 words of text, a considerable advance over previous versions. GPT-4 also demonstrates significant accuracy improvements over GPT-3.5, gaining the capacity to interpret and summarise images and to condense complex texts. However, it still occasionally generates bullshit!
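That 25,000-word limit suggests a simple practical pattern: split longer documents into word-budgeted chunks before sending them to the model. The sketch below is a rough heuristic of my own; real systems count tokens rather than whitespace-separated words, so treat the limit here as an approximation.

```python
def chunk_by_words(text, limit=25_000):
    """Split text into pieces of at most `limit` whitespace-separated words."""
    words = text.split()
    return [" ".join(words[i:i + limit]) for i in range(0, len(words), limit)]

# A 60,000-word document becomes three chunks: 25,000 + 25,000 + 10,000 words.
chunks = chunk_by_words("word " * 60_000)
```

Each chunk can then be summarised separately and the summaries combined, which is a common workaround whenever a document exceeds a model’s context window.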