Imageability

    Imageability is a measure of how easily a physical object, word or environment will evoke a clear mental image in the mind of any person observing it. It is used in architecture and city planning, in psycholinguistics, and in automated computer vision research. In automated image recognition, training models to connect images with concepts that have low imageability can lead to biased and harmful results.

    History and components

    Kevin A. Lynch first introduced the term "imageability" in his 1960 book The Image of the City. In the book, Lynch argues that cities contain a key set of physical elements that people use to understand the environment, orient themselves within it, and assign it meaning.

    Lynch argues the five key elements that impact the imageability of a city are Paths, Edges, Districts, Nodes, and Landmarks.

    • Paths: channels in which people travel. Examples: streets, sidewalks, trails, canals, railroads.
    • Edges: objects that form boundaries around space. Examples: walls, buildings, shorelines, curbstones, streets, and overpasses.
    • Districts: medium to large areas people can enter into and out of that have a common set of identifiable characteristics.
    • Nodes: large areas people can enter that serve as the foci of the city, neighborhood, district, etc.
    • Landmarks: memorable points of reference that people cannot enter. Examples: signs, mountains, and public art.

    In 1914, half a century before The Image of the City was published, Paul Stern discussed a concept similar to imageability in the context of art. Stern, in Susanne Langer's anthology Reflections on Art, calls the attribute describing how vividly and intensely an artistic object can be experienced "apparency".

    In computer vision

    Automated image recognition was developed by using machine learning to find patterns in large, annotated datasets of photographs, such as ImageNet. Images in ImageNet are labelled using concepts from WordNet. Concepts that are easily expressed verbally, like "early", are considered less "imageable" than nouns referring to physical objects, like "leaf". Training AI models to associate low-imageability concepts with specific images can introduce problematic bias into image recognition algorithms. This has been critiqued particularly as it relates to the "person" category of WordNet, and therefore also of ImageNet. Trevor Paglen and Kate Crawford demonstrated in their essay "Excavating AI" and their art project ImageNet Roulette how such training leads to photos of ordinary people being labelled by AI systems as "terrorists" or "sex offenders".

    Images in datasets are often labelled as having a certain level of imageability. As described by Kaiyu Yang, Fei-Fei Li, and co-authors, this is often done following criteria from Allan Paivio and collaborators' 1968 psycholinguistic study of nouns. Yang et al. write that dataset annotators tasked with labelling imageability "see a list of words and rate each word on a 1-7 scale from 'low imagery' to 'high imagery'".

    To avoid biased or harmful image recognition and image generation, Yang et al. recommend not training vision recognition models on concepts with low imageability, especially when the concepts are offensive (such as sexual or racial slurs) or sensitive (their examples for this category include "orphan", "separatist", "Anglo-Saxon", and "crossover voter"). Even "safe" concepts with low imageability, like "great-niece" or "vegetarian", can lead to misleading results and should be avoided.
