Noah Brier | October 23, 2024

The AI Data Paranoia Edition

On terms of service, the Barnum effect, and Fingerspitzengefühl.

What follows is a WITI-minded adaptation of an excellent dispatch Noah sent out recently. We wanted you to be able to enjoy it too. – LC and CJN.

Noah here. Since ChatGPT came out, there’s been concern about the data we input being used for training. Some of this has to do with OpenAI’s consumer terms of service, which do allow the company to train on the data you input. But from the beginning, the terms for the API, and now for Team and Enterprise accounts, have been clear that anything you put in is off-limits for training. Here’s the pertinent paragraph from their Business Terms:

3.2 Our Obligations for Customer Content. We will process and store Customer Content in accordance with our Enterprise privacy commitments. We will only use Customer Content as necessary to provide you with the Services, comply with applicable law, and enforce OpenAI Policies. We will not use Customer Content to develop or improve the Services.

Anthropic, another AI company, has fairly similar terms, though its consumer agreement says it won’t train on your data unless you give feedback (a thumbs up or down) or your data is flagged for safety reasons.

Why is this interesting?

Because I keep having arguments with lawyers at brands and agencies about whether these services train on their data.

I have a few thoughts on why this is happening.


© WITI Industries, LLC.