Since the beginning of 2024, the demand for the content created by the Wikimedia volunteer community – especially for the 144 million images, videos, and other files on Wikimedia Commons – has grown…
They can also crawl this publicly accessible social media source for their data sets.
I’m on board with abandoning mainstream social media, but my point is that your suggestion would not solve the problem, just relocate it. A better solution to the AI conglomerates stealing everyone’s data from the open Internet is legislation and regulation – i.e. tackling the whole ‘stealing data’ component – along with stronger privacy regulations for everyone, to make it harder for them to do the same in the future. It’s nice seeing the EU take some positive steps, but we will not see the US take any steps in that direction anytime soon, due to corporate capture of their politicians and the AI companies all being among the top 10 wealthiest companies in the US.
Crawling would be silly. They can simply set up a Lemmy node and subscribe to every other server. An ActivityPub subscriber would be much more efficient: it wouldn’t accidentally re-crawl things that haven’t changed, but could instead just read the ActivityPub updates as they arrive.
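A rough sketch of what that subscription looks like: the subscriber sends a `Follow` activity to a community’s actor, and once accepted, new posts arrive as pushed `Create` activities instead of being re-fetched. The actor URLs here are made up for illustration.

```python
import json

def make_follow(actor: str, target: str) -> dict:
    """Build a minimal ActivityPub 'Follow' activity from `actor` to `target`."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Follow",
        "actor": actor,
        "object": target,
    }

# Hypothetical actors: a collecting server and a Lemmy community it subscribes to.
follow = make_follow(
    "https://collector.example/actor",
    "https://lemmy.example/c/technology",
)

# This JSON would be POSTed to the target actor's inbox; after the follow is
# accepted, updates are delivered to the collector rather than crawled.
print(json.dumps(follow, indent=2))
```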
Sure, but we’re in the comments section of an article about Wikipedia being crawled, which is silly because they could just download a snapshot of Wikipedia.
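For the record, Wikimedia publishes those snapshots itself on dumps.wikimedia.org. Something like the following would grab the full English article dump (the compressed file is on the order of tens of gigabytes, so the download line is commented out; this is illustrative, not a recommendation to run it casually):

```shell
# Latest English Wikipedia article dump, straight from Wikimedia's dump server.
DUMP_URL="https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"

# wget --continue "$DUMP_URL"   # resume-friendly download (very large file)
echo "$DUMP_URL"
```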
So, uh. What about Lemmy?
Yet the EU helped introduce super cookies and is trying to end encryption on communications.