Webinar

Data in, Data out: How it shapes ChatGPT and us

We revisit ChatGPT and other generative AI systems with a webinar focusing on data, copyright and privacy.

Thursday, July 13, 2023 | 15:00 CET


About this Webinar

This month Start Talking will revisit ChatGPT and other generative AI systems with a two-part webinar focusing on data, copyright and privacy.


Part One: Data In


We will look at the huge range of data that feeds generative AI training models. For creatives, how are their music, words and images used, and what implications does this have for legal rights like copyright and for the future of the creative industry? For consumers, what does a world of computer-generated culture look and feel like?


Part Two: Data Out


Can the data coming out of these models be trusted? When does shaky advice become an infringement of consumer laws on misleading information? Is it possible to arm ourselves against the information deluge of large language models, or is action at the top of the pipeline needed?


Lodovico Benvenuti

Managing Director, IFPI European Office


Calli Schroeder

Global Privacy Counsel at Electronic Privacy Information Center


Elias Papadopoulos

Director of Policy, DOT.Europe


StJohn Deakins

Founder and CEO of CitizenMe


Christian D’Cunha

Data policy, privacy and cybersecurity, European Commission


Top questions:

01

For creatives, how are their music, words and images used, and what implications does this have for legal rights like copyright and for the future of the creative industry?

02

For consumers, what does a world of computer-generated culture look and feel like?

03

Can the data coming out of these models be trusted?

04

When does shaky advice become an infringement of consumer laws on misleading information?

05

Is it possible to arm ourselves against the information deluge of large language models, or is action at the top of the pipeline needed?

Quotes

❛❛

Data going in isn't representative, which will only amplify societal biases and preconceptions. These questions must be answered before determining what data is used for or how it should be regulated.

❜❜

Christian D'Cunha

Data policy, privacy and cybersecurity, European Commission
❛❛

We have a responsibility to protect our artists, their image and personalities, so there's an element of concern. However, the industry has adapted to disruptions and we've learned to embrace them positively. It's important to have future-proof partnerships with greater visibility into data.

❜❜

Lodovico Benvenuti

Managing Director, IFPI European Office
❛❛

While this shakes me to the core, I do understand why consumers use it to search for information. However, there are some positive use cases: putting together formats, and structuring novels or court documents. It helps to create a level playing field, but blind trust is wrong.

❜❜

Calli Schroeder

Global Privacy Counsel, Electronic Privacy Information Center
❛❛

ChatGPT isn't an information database designed to provide factual information. It functions as a model to generate convincing responses. There is a way to reduce the margin of error, but we can never eliminate it entirely.

❜❜

Elias Papadopoulos

Director of Policy, DOT.Europe
❛❛

As individuals, we are all creators. We all feed personal data into these models, often without even being aware of it. In these early stages, these models are still crude, carrying potential harms and being virtually impossible to track.

❜❜

StJohn Deakins

Founder and CEO, CitizenMe