In in french google translate

Word embedding works by linking words to a vector of numbers, which algorithms can use to calculate probability. By looking at what words tend to appear around other words, like “engineer,” the model can be used to figure out what other word fits best, like “he.” The price of learning from reams of existing text and dialogue is that such models pick up the true-to-life imbalance between genders when it comes to jobs or opportunities. A 2016 study that trained word-embedding models on Google News articles showed gender stereotypes “to a disturbing extent,” according to its researchers.

“This is an approach that has taken off and is extremely widespread in the industry, and that’s why it’s so important to interrogate the underlying assumption,” says McCurdy. McCurdy says that there isn’t anything necessarily wrong with the word-embedding model itself, but it needs human guidance and oversight. “The default now is to build these applications and release them into the wild and fight the fires when they come out,” she adds. “But if we were more deliberate about this and took things more seriously, we’d do more work to integrate a more critical perspective.”

Companies who are using the word-embedding model to make services for consumers also need more diverse programmers who are more likely to notice the risk of biases before they crop up. “If we’re serious about having artificial intelligence make decisions that don’t end up with biases that we don’t want to reinforce, we need to have more diverse and critical people looking at this earlier on in the process.”
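The mechanics described above — words mapped to number vectors, with nearby vectors treated as related — can be sketched in a few lines. This is a minimal illustration with hypothetical, hand-picked toy vectors (real embeddings have hundreds of dimensions learned from text, not three made-up ones), showing how cosine similarity lets a model decide that “engineer” sits closer to “he” than to “she”:

```python
import math

# Hypothetical toy vectors for illustration only -- a trained model
# like word2vec or GloVe would learn these from large text corpora.
embeddings = {
    "engineer": [0.9, 0.1, 0.8],
    "nurse":    [0.1, 0.9, 0.7],
    "he":       [0.8, 0.2, 0.1],
    "she":      [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_fit(word, candidates):
    """Return the candidate whose vector is nearest to `word`'s vector."""
    return max(candidates, key=lambda c: cosine(embeddings[word], embeddings[c]))

# With these toy vectors, "engineer" associates with "he" -- the same
# kind of learned stereotype the 2016 study flagged in real models.
print(best_fit("engineer", ["he", "she"]))  # -> he
print(best_fit("nurse", ["he", "she"]))     # -> she
```

The point of the sketch is that the association is nothing more than geometry: if the training text places “engineer” near male pronouns, the vectors encode that imbalance, and every downstream application inherits it unless someone audits the model.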