A multidisciplinary team aims to build a more inclusive AI shaped by global cultures and knowledge – one of three projects that make up Cornell’s new Global Grand Challenge: The Future.
Cornell AI News
Reducing the cultural bias of AI with one sentence
“Cultural prompting” – asking an AI model to perform a task like someone from another part of the world – resulted in reduced bias in responses for the vast majority of the more than 100 countries tested by a Cornell-led research group.
Brevity is money when using AI for data analysis
A new computational system called Schemonic, developed by Cornell researchers, cuts the costs of using large language models such as ChatGPT and Google Bard by combing through large datasets and generating what amounts to "CliffsNotes" versions of the data.
Leading the Charge in Cybersecurity, Trust, and Safety
In an era of ever-evolving digital threats, advanced education and research in cybersecurity, trust, and safety are paramount. Cornell Tech's new Security, Trust, and Safety (SETS) Initiative, a cutting-edge program designed to revolutionize these fields, addresses these challenges head-on. The director of the SETS program, Google alum Alex
Redesigning videoconferencing for, and by, people who stutter
New research and an app aim to make Zoom and other video conferencing platforms less stressful for people with speech diversities, while improving the experience for everyone.
Rising Star Ben Laufer: Improving Accountability and Trustworthiness in AI
With artificial intelligence increasingly integrated into our daily lives, one of the most pressing concerns about this emerging technology is ensuring that the new innovations being developed consider their impact on individuals from different backgrounds and communities. The work of researchers like Cornell Tech PhD student Ben Laufer is critical for understanding the social and ethical implications of algorithmic decision-making.
Employees prefer human oversight to AI surveillance – unless the technology can be framed as supporting their development, new Cornell research finds.
Successful Artificial Intelligence Event Inspires Large Audience on May 29
The Emerging Tech Dialogues event on May 29, 2024 — the first in a new series — drew more than 750 registrations from Cornell, Weill Cornell Medicine, and Cornell Tech faculty, staff, students, and researchers — all interested in exploring Artificial Intelligence in Higher Education, the symposium’s theme.