Pennsylvania's OpenAI deal shows US authorities are flocking to generative AI, but privacy concerns still linger

OpenAI and ChatGPT logos displayed on a smartphone
(Image credit: Getty Images)

Pennsylvania will be the first US state to use OpenAI’s ChatGPT Enterprise as part of a trial of AI tools for state employees, as states and cities across the US warm to the use of generative AI.

The pilot is Pennsylvania’s first use of generative AI tools for government employees, state officials said. The findings will help guide the wider integration of the technology into state processes, and could serve as a compelling use case for broader rollouts across the country.

OpenAI said the pilot program will help state employees explore use cases for generative AI, but Pennsylvanians won’t find themselves interacting directly with ChatGPT when they deal with the state government.

The pilot begins this month and will see workers at the state’s Office of Administration use the tool for tasks such as creating and editing copy, making outdated policy language more accessible, drafting job descriptions, managing duplication within the hundreds of thousands of pages of employee policies, and generating code. 

Data will not be shared between state agencies, and workers are not allowed to enter any sensitive information – including personally identifiable information – when using ChatGPT, officials said. An additional 100 licenses will eventually be made available to other state employees for shorter periods once initial feedback and findings have been gathered.
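
The article doesn’t say how that restriction is enforced, but a common technical safeguard is to screen prompts for obvious personally identifiable information before they reach an external service. Below is a minimal, purely illustrative Python sketch of such a pre-submission check – the patterns and the `pii_matches` helper are hypothetical, not part of Pennsylvania’s actual deployment.

```python
import re

# Illustrative only: a simple pre-submission screen that flags prompts
# containing obvious PII before they are sent to a generative AI service.
# Real deployments use far more robust detection than these regexes.
PII_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def pii_matches(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Rewrite the grievance filed by jane.doe@example.com (SSN 123-45-6789)."
    found = pii_matches(draft)
    if found:
        print("Blocked before submission: prompt appears to contain " + ", ".join(found))
    else:
        print("Prompt passed the PII screen")
```

Pattern-based screening like this only catches well-formatted identifiers; production controls typically layer dedicated PII-detection services and staff training on top.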

OpenAI eyes broader enterprise adoption

ChatGPT Enterprise launched in August 2023 and offers stronger security and privacy, unlimited higher-speed GPT-4 access, and longer context windows than the consumer version, allowing it to process longer inputs.

OpenAI lists early users of ChatGPT Enterprise as including Block, Canva, Carlyle, The Estée Lauder Companies, PwC, and Zapier. The company says it does not use business data or conversations to train its models, and that all conversations are encrypted in transit and at rest.

“Our goal with the pilot is to work closely with a small number of employees to figure out where we can have the greatest impact using generative AI tools,” said Office of Administration Secretary Neil Weaver.

“Their input will help us understand the practical applications of generative AI in their daily work and how we can best support our workforce as the technology becomes more widespread.”

US states are flocking to generative AI tools

Pennsylvania might be the first state to try out ChatGPT Enterprise, but cities and states across the US – as well as the federal government and others internationally – are trying to figure out the best approach to the rise of generative AI.

It’s proving to be a tricky balancing act for many. On the one hand, these tools can help automate and streamline administrative tasks, saving time and money. But AI tools might also incorporate hidden bias into decision-making, pose a risk to jobs, and raise serious questions over privacy and security.

In the private sector, concerns over data privacy have been a key talking point when discussing the use of generative AI tools. Last year, a host of major organizations including Apple and Amazon banned the use of ChatGPT due to the risk of internal data being leaked.

However, US states exploring the use of the technology are setting robust guidelines around what AI should be used for.

In September, Pennsylvania governor Josh Shapiro signed an executive order that said the state should be at the forefront of harnessing generative AI, but also warned that it should be used in a “human-centered and equitable manner testing for and protecting against bias so that its use does not favor or disadvantage any demographic group over others.”

It said the design, development, procurement, and deployment of generative AI must not adversely affect the privacy rights of users.

In December 2023, the city of San Francisco published its own guidelines for staff who were using generative AI tools.

The guidelines explain that generative AI creates new data based on patterns learned from existing data, and can produce content that mimics human creativity. They also came with a warning, however, with officials specifically highlighting concerns around responsible use and the potential for bias.

“Generative AI differs from AI technology currently in use by the City, which supports informed decisions based on input data but does not create new content. Generative AI offers new opportunities and also poses unique challenges to ensure responsible and effective use,” the guidance states.

“These tools use large sets of data culled from the internet to produce new content quickly. Because data from the internet can be subject to gender, racial, and political biases and inaccurate information, there is potential for AI-generated content to reproduce biases and misinformation.”

The San Francisco guidelines encourage staff to experiment with generative AI tools for drafting and formatting text and explanatory images using public information.

The guidance says workers should thoroughly review and fact-check all AI-generated content, whether text, code, or images. They should also disclose when and how generative AI was used in their output.

It also warned workers not to enter any information into public generative AI tools that cannot be fully released to the public. Staff were told not to ask these tools to find facts or make decisions without expert human review, or to generate images, audio, or video that could be mistaken for real people.

Nor should they conceal the use of generative AI when interacting with colleagues or the public – for example, tools that listen to and transcribe a conversation, or that provide simultaneous translation.

Just this week, New York state unveiled its own ‘Acceptable Use of Artificial Intelligence Technologies’ policy to cover the broader use of systems that rely on machine learning, large language models, natural language processing, and computer vision technologies – including generative AI.

It said that responsible use of AI can drive innovation, increase operational efficiencies, and better serve New Yorkers – while protecting privacy, managing risk, and promoting accountability, safety, and equity.

As with the other state guidelines, keeping humans in the loop is a key consideration. The policy says state agencies and contractors must ensure that decisions affecting the public are not made without oversight by appropriate staff, who make the final decisions.

“Automated final decision systems are not permitted,” it states. Where AI systems are used to aid decision-making that impacts the public, agencies must ensure the outcomes, decisions, and supporting methodologies of those systems are documented.

Cities and states are keen to promote themselves as friendly to AI startups and the potential of the technology. Along with its new guidance, New York is also investing $400 million in AI research. Balancing all these competing demands is going to be hard – perhaps too hard even for generative AI to figure out alone.

Steve Ranger

Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of silicon.com.