Tech companies train AI on everyone's data and then turn around and produce proprietary models they plan to use as a source of profit. These models wouldn't be possible without the collective creative output of humanity itself.

It's like they're robbing from the commons: sticking humanity's cultural heritage in a computer and fencing off the result.

That half of what they take has already been denied to the public by century-long copyright terms is just icing on the cake.

MC

@jeck@dice.camp

AI has a strong "financial moat" in the form of the processing power required to train and later run these models, which is why you see companies sharing research, techniques, sometimes even models themselves.

As that moat erodes, they'll inevitably turn toward protecting their source of profit. That means closing access and pushing legislation that prevents the rise of competition.

January 4, 2023 at 6:03:15 PM

The simplest way to secure tech corporations' profits is to push for copyright-enforcement laws on training data while grandfathering in existing models, making it impossible for new models to be trained.

Even without a grandfathering clause, the vast majority of creative copyright is held not by artists but by huge corporations like Disney, which use their monopsony power to do things like outright refuse to pay the artists they hire.
