How artificial intelligence can make it easier to execute imprecise commands


Artificial intelligence could broaden the ways we interact with a wide range of everyday digital tools.

Today, if a voice assistant (such as Google, Siri, or Alexa) is asked to play music, the request must be specific and precise. For example, a command along the lines of "Play X song by X artist" should work perfectly; but a statement such as "Play rock music at random, but leave out the Beatles" will probably not produce the expected result.

The same applies to search engines. On platforms whose databases are limited to their own content, such as Wikipedia or news portals, a search for "rock bands other than The Beatles" will probably return no results, since that exact phrase is unlikely to appear in their records.

Running the same query on Google or a comparable search engine, the results might one way or another lead to the expected answer. In those cases, however, the result comes from the content of the websites each engine has indexed, not from the platform's own ability to understand the question and filter the information.


Support for imprecise searches and negative statements, thanks to AI

This scenario caught the attention of computer scientist Simon Razniewski and his collaborators at the Max Planck Institute for Informatics in Saarbrücken. The team pioneered a method for better managing the knowledge bases on which artificial intelligence platforms operate, giving these systems the ability to adapt to such contexts.

The team presented its proposed model in a report detailing the development and conclusions of this work.

The document uses Stephen Hawking as an example within a data sample. First, the common elements among the members of the sample are identified to produce an initial grouping criterion; in this case, the category applied is 'physicist'. In the researchers' jargon, these comparison cases are called "peers".

Based on these "peers", characteristic elements are identified as positive statements. Returning to the example: since the physicists Albert Einstein and Richard Feynman won the Nobel Prize, one could assume that Stephen Hawking also won the Nobel Prize.

These assumptions are then checked against the data identified in the earlier crawl. If a statement applies to the peers but not to the subject of the search, the researchers treat it as a negative statement for that subject: in other words, "Stephen Hawking never won the Nobel Prize." To rank the relevance of the negative statements generated, they are sorted by several parameters, for example, how often they occur within a group of peers.
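The steps above can be sketched in a few lines of code. This is a minimal illustration of the peer-based idea, not the team's actual implementation: the toy knowledge base, the entity names' statements, and the function names are all assumptions made for the example.

```python
# Toy knowledge base: entity -> set of (predicate, object) statements.
# The data here is illustrative only, not the paper's actual dataset.
KB = {
    "Albert Einstein": {("occupation", "physicist"), ("won", "Nobel Prize")},
    "Richard Feynman": {("occupation", "physicist"), ("won", "Nobel Prize")},
    "Stephen Hawking": {("occupation", "physicist"), ("won", "Copley Medal")},
}

def peers_of(subject, kb):
    """Grouping step: entities sharing at least one statement with the subject."""
    return [e for e, stmts in kb.items()
            if e != subject and kb[subject] & stmts]

def negative_statements(subject, kb):
    """Statements that hold for the peers but not for the subject,
    ranked by how often they occur within the peer group."""
    counts = {}
    for peer in peers_of(subject, kb):
        # Candidate negatives: what the peer has that the subject lacks.
        for stmt in kb[peer] - kb[subject]:
            counts[stmt] = counts.get(stmt, 0) + 1
    # Most frequent among peers first (a simple relevance ranking).
    return sorted(counts.items(), key=lambda kv: -kv[1])

for stmt, freq in negative_statements("Stephen Hawking", KB):
    print(f"Stephen Hawking: NOT {stmt} (holds for {freq} peer(s))")
    # -> Stephen Hawking: NOT ('won', 'Nobel Prize') (holds for 2 peer(s))
```

In this sketch, Einstein and Feynman become Hawking's peers via the shared "physicist" statement, and "won the Nobel Prize" surfaces as the top-ranked negative statement because it holds for both peers but not for Hawking.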

Simply by refining how information is managed, a new opportunity opens up to improve the user experience of certain platforms, such as those mentioned at the start of this note, and of other applications made possible by this technology's characteristic versatility.

As is often the case with such announcements, the published report is a scientific demonstration of recent progress which, once suitably "matured," could find more visible applications.
