AUSTIN (KXAN) -- KXAN viewers helped shape our newsroom's guidance on using artificial intelligence tools and disclosing that use.
KXAN was part of a cohort of 10 newsrooms that partnered with Trusting News earlier this year on a survey and deep-dive interviews to better understand our audience's comfort level with artificial intelligence and journalism. Trusting News is an organization that works to "inspire and empower journalists to evolve their practices in order to actively earn trust," according to its website.
The results of KXAN's survey mirrored those of the cohort, which showed an overwhelming number of respondents want AI use in journalism disclosed.
The survey was conducted using Google Forms and ran from July to August 2024.
KXAN's online survey drew responses from 515 people.
KXAN also conducted 10 in-depth interviews with people who responded to our survey to better understand their ideas, concerns and questions surrounding AI and news.
Those survey results show that we and our audience are on the same page about providing factual and fair coverage of our communities. They also propelled us to outline and publish guidance on our approach to AI to maintain transparency. Read the guidance below or find it on our kxan.com/ai page.
KXAN's approach to AI
KXAN works to be innovative and knowledgeable about the latest technologies, including artificial intelligence tools, to serve two purposes:
When experimenting with and using AI tools, we are committed to verifying all content for accuracy and ensuring it meets our ethical standards before it airs or is published. All tools must also be vetted by the AI council at our parent company, Nexstar.
Every story, whether an AI tool was involved or not, must always be reviewed by at least one other person before it is published on our site or airs in our broadcasts.
We are committed to transparency, accuracy and serving you, our audience. As we experiment with and use these tools, we welcome your feedback and thoughts. You can email Digital Director Kate Winkle at [email protected]. If you have any questions or story ideas related to AI, email [email protected].
Artificial intelligence tools have been around for a while -- they help power customer service chatbots and virtual assistants like Apple's Siri, Google Assistant and Amazon's Alexa, enable phones to offer predictive text and generate automatic captions on YouTube and Facebook. More recently, a branch of AI called generative AI has come to the forefront. Generative AI can produce new content in response to prompts.
Simplistically, generative AI works by giving an algorithm a large amount of data -- text, images or video -- that it uses as examples of what it should create. Other algorithms then guide it to produce something when asked. Most generative AI tools respond to prompts that users type in, then output text, images or video; the model draws on the data it was trained on to produce what it predicts the user is asking for. These outputs have become more sophisticated over the past few years, but they are fallible and may include errors or incorrect information. IBM Research has an overview of the technology online, and tools like OpenAI's ChatGPT, Microsoft's Copilot and Google's Gemini also share some insight into their methodology online.
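For readers curious what "responding to a prompt" looks like in practice, here is a minimal sketch of asking a generative AI text model a question from a short Python program. It assumes OpenAI's official Python SDK and an API key; the model name and prompt are illustrative assumptions only, not tools KXAN uses in its journalism.

```python
# A minimal sketch of a prompt-to-output call, assuming the OpenAI Python SDK
# ("openai" package) and an API key stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt is the user's request; the model generates new text in response,
# based on patterns in the data it was trained on.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {
            "role": "user",
            "content": "In one sentence, explain what it means to disclose the use of AI in a news story.",
        }
    ],
)

# The generated text: plausible-sounding output that still needs human review for accuracy.
print(response.choices[0].message.content)
```

The key point the sketch illustrates is that the model's answer is generated, not retrieved from a vetted source, which is why any such output would still be checked by a person before it reaches our audience.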
Disclosing use of AI tools
More than 500 people responded to a survey KXAN launched in partnership with Trusting News and nine other newsrooms to understand which types of AI tools community members are comfortable with and how they want to be told when those tools are used. People overwhelmingly wanted any AI use in our journalism to be disclosed, and a majority wanted details about that use.