
I think the issue they're mainly worried about might be exemplified with a prompt like 'my little pony': a children's show with quite a lot of adult imagery associated with it on the internet.

A child entering this prompt is probably expecting one thing, but the internet is filled with pictures of another nature. There are possibly more adult 'my little pony' images than screenshots of the show on the internet.

Did the researchers manage to filter out these images before training? Or is the model aware of both 'kinds' of 'my little pony' images? If the researchers aren't sure they got rid of all of the adult content, then there's really no way to guarantee the model isn't about to ruin some oblivious person's day.

So then, do you require people generating images to be intimately familiar with the training dataset? Or do you attempt to prevent any kind of surprise by just blocking 'unexpected' interactions like this?
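For what it's worth, 'filtering before training' in practice usually means running an NSFW classifier over every (image, caption) pair and dropping the ones that get flagged. A minimal sketch, where nsfw_score() is a hypothetical stand-in for a real classifier (CLIP-based detectors are one real-world option) and the threshold is an assumption:

    NSFW_THRESHOLD = 0.5  # assumed cutoff; would be tuned per classifier

    def nsfw_score(image_path):
        # Stand-in for a real classifier; this function is assumed,
        # not part of any actual training pipeline.
        raise NotImplementedError("plug in a real NSFW classifier")

    def filter_dataset(pairs):
        # Keep only (image_path, caption) pairs scoring below threshold.
        return [(img, cap) for img, cap in pairs
                if nsfw_score(img) < NSFW_THRESHOLD]

Even with such a filter, any classifier has false negatives, which is exactly why the researchers can't guarantee the trained model never saw the 'other' kind of image.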



> A child entering this prompt is probably expecting one thing, but the internet is filled with pictures of another nature. There are possibly more adult 'my little pony' images than screenshots of the show on the internet.

So everyone has to have gimpy AI just because parents can't be expected to take responsibility for what their child does and does not see? Why the fuck is a child being allowed to play with something that can very easily spit out salacious images accidentally? Wouldn't it be significantly easier to add censorship to the prompt input instead? It seems like these tech companies see yet another opportunity to add censorship to their products and can hardly hide their giddy excitement.
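For concreteness, the prompt-input censorship suggested here can be as simple as a blocklist check before the prompt ever reaches the model; the terms below are purely illustrative assumptions:

    BLOCKLIST = {"nsfw", "nude", "explicit"}  # illustrative terms only

    def is_prompt_allowed(prompt):
        # Reject prompts containing any blocklisted word.
        words = set(prompt.lower().split())
        return not (words & BLOCKLIST)

Of course, a prompt filter only catches explicit requests; it does nothing about an innocent prompt that happens to hit NSFW training data, which is the scenario upthread.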


Just to be clear, the child was just an example of someone who could theoretically experience 'cruel' treatment from the current version of Stable Diffusion. I'm absolutely not recommending people let their children use the model unsupervised. It doesn't have to be a parenting problem, though.

The same could be said (for example) of a random mother trying to get inspiration for a 'my little pony' birthday cake for her child, and being presented with the 'other' kind of image unintentionally, without her consent. I think she would be justifiably upset in that situation.

If we were to imagine someone attempting to put Stable Diffusion into some future consumer product, I think they would have to be concerned about these kinds of scenarios. That's why the researchers are trying to figure out how to accomplish the filtering.

FWIW, I don't think a model could be made that actively prevented people from using their own NSFW training data. The only difference in the future will be that the public models won't be able to do it 'for free' with no modifications needed. You'll have to train your own model, or wait for someone else to train one.


Interestingly, better results might be achieved by exposing the model to a large corpus of appropriately tagged NSFW data, so that the prompt may explicitly exclude it. I imagine img2img could also make an image SFW, or vice versa. I'd be curious to know what kind of alterations it would make.
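With HuggingFace's diffusers, that exclusion would be expressed through the negative_prompt argument. A sketch, assuming the model actually learned an 'nsfw' tag from its captions (the tag itself is an assumption):

    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4")
    # Steer generation away from the assumed tag.
    image = pipe("my little pony",
                 negative_prompt="nsfw").images[0]
    image.save("pony.png")

This only works as well as the tagging: the negative prompt can only push away from concepts the model has a handle on.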


I would recommend looking more closely at the article.

Stability.ai, the company that developed and released the model being discussed, has not added a safety filter to the model itself. As the article points out, the filter is specifically implemented by HuggingFace's Diffusers library, a popular library for working with diffusion models (but, to be clear, not the only option for using Stable Diffusion). The library is also open source, and turning off the safety filter would be trivial if you felt compelled to do so.
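For illustration, disabling it amounts to one argument when loading the pipeline (API details as of recent diffusers releases; verify against your installed version):

    from diffusers import StableDiffusionPipeline

    # Passing safety_checker=None skips the post-generation NSFW check.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        safety_checker=None)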

So, "these tech companies" aren't overcome by glee over censoring you. One company implemented one filter in one open source and easily editable library.


But they did add one. The line of code is literally in the CompVis release, and you can disable it.
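If memory serves, the CompVis repo runs the check through a check_safety() helper in scripts/txt2img.py; stubbing it out disables the filter (function name from memory, so confirm against your checkout):

    def check_safety(x_image):
        # Replacement body: report nothing as flagged and pass
        # the generated images through untouched.
        has_nsfw_concept = [False] * len(x_image)
        return x_image, has_nsfw_concept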


> because parents can't be expected to take responsibility for what their child does and does not see?

This is an opinion you could only have if you’ve never raised or even spent time around children.

How would your parents have prevented you from getting unsupervised access? Do you think you’d have gone along with restrictions?


Because in aggregate, children seeing those things has an impact on society.

Like sure would it be better if parents monitored their children’s 4chan use? Ofc.

Is that at all a practical approach to eliminating Elliot Rodger idolization? No.



