Learning to Look: Pulsar partners with Manchester School of Art to understand the difference between how machines and humans look at images
The social data industry does not often cross over with academia, but a new funded PhD, Learning to Look, offers a unique chance for the two to partner: to understand how humans and machines look at images, and to help each learn from the other about how to analyze them.
Pulsar’s co-founder, Francesco D’Orazio, says: “With the amount of images being produced and shared every second, the very idea of ‘looking’ has changed, and we’ve brought in tech to help us extend our senses. As humans we’re really good at adapting to new technologies, to the point where they quickly become invisible and essential. This is particularly true with AI, and whilst for us at Pulsar it’s always been crucial to be at the forefront of experimentation with deep learning and image analysis, I think it’s important to take a step back and assess how this technology is redefining our most important sense and how it’s shaping the way we see the world.”
Since the Visual Social Media Lab was founded in 2014, Pulsar has contributed to studies on the journey of an image, the death of Aylan Kurdi, how images are used as conversation, and more. Now the platform will be used in the newly funded PhD study ‘Learning to Look’, working with the Visual Social Media Lab at Manchester School of Art.
Learning to Look proposes innovative comparative image research, critically comparing the results of human and AI-supported analysis side by side. It will explore the current gap between the social data industry and the research that can be done using visual social media insights, setting industry practice against traditional academic research methodologies. The project’s key goal is a comparative analysis of a large corpus of social media images, using methods common in the arts and humanities, such as content analysis, alongside AI approaches supplied by Pulsar.
Despite the exponential growth of image sharing on social media in the past few years, academic research still struggles to break down large datasets of images, owing to their sheer scale and the extensive work needed to analyze each image. Combining industry methodology, such as using machine learning to identify concepts within images, with human academic methods offers a solution, but that combination remains a work in progress.
Image analysis at Pulsar
Since 2015 we have made considerable investments in AI, first by launching Pulsar Vision (December 2015), a deep learning solution that helps users of the platform make sense of social media images. Shortly afterwards we extended this capability with ‘Modules’ (June 2016), which let users select specific AI models to analyze a dataset, depending on the subject, the industry and the nature of the data being analyzed. Pulsar Modules include concept tagging, emotion analysis and image text extraction, with many more domain-specific AI modules to be released next month.
In the next AI Modules release we’ll introduce vertical AI, including apparel, travel and food recognition, as well as video analysis, colour analysis, celebrity recognition, demographic analysis (age, gender, ethnicity) and logo detection. The algorithm first identifies and applies concepts to 100,000 images; it then analyzes the tags that have been identified and clusters them accordingly. This output is then checked and verified by a human before going into production.
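To make that tag-then-cluster step concrete, here is a minimal sketch in Python. Everything in it is a stand-in: the tags_per_image data takes the place of a real concept tagger’s output, scikit-learn’s KMeans stands in for whatever clustering is actually used, and the cluster count is a free parameter a human reviewer would tune.

```python
from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans

# Stand-in for a concept tagger's output: image id -> detected concepts.
tags_per_image = {
    "img_001": ["beach", "sea", "sunset"],
    "img_002": ["sea", "boat", "sunset"],
    "img_003": ["pizza", "restaurant", "table"],
    "img_004": ["pasta", "restaurant", "table"],
}

# Build a tag co-occurrence matrix: tags that appear together in many
# images get similar rows, so they end up in the same cluster.
vocab = sorted({t for tags in tags_per_image.values() for t in tags})
index = {t: i for i, t in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for tags in tags_per_image.values():
    for a, b in combinations(set(tags), 2):
        cooc[index[a], index[b]] += 1
        cooc[index[b], index[a]] += 1

# Cluster the tags; the resulting groups are what a human reviewer
# would then check and verify before anything goes into production.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cooc)
for k in range(2):
    print(f"cluster {k}:", [t for t, lab in zip(vocab, labels) if lab == k])
```

The human verification pass is the important part of the design: the clustered concepts are only put to work once a reviewer has confirmed they make sense.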
Human research meets machine learning
The project we are about to kick off with Manchester Metropolitan University is based on an innovative research design that compares the results of human and AI understanding of a large image dataset side by side, allowing the human research methodologies and the AI approaches to learn from one another.
The results will enable a better understanding of the strengths and weaknesses of human- and AI-driven image analysis, as well as of the results themselves, and will show how academic and industry methods can work side by side and learn from one another to improve analysis.
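One way to picture such a side-by-side comparison is sketched below, under our own assumptions: the tag sets are hypothetical, and Jaccard similarity is our illustrative choice of agreement measure, not a method specified by the project.

```python
# Hedged sketch: compare a human coder's tags with a model's tags
# for the same images, and surface where the two diverge.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets: 1.0 identical, 0.0 disjoint."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical annotations for the same two images.
human = {
    "img_001": {"beach", "family", "holiday"},
    "img_002": {"protest", "crowd", "banner"},
}
machine = {
    "img_001": {"beach", "sea", "people"},
    "img_002": {"crowd", "street", "banner"},
}

for img in human:
    h, m = human[img], machine[img]
    print(img, f"agreement={jaccard(h, m):.2f}",
          "human only:", sorted(h - m),
          "machine only:", sorted(m - h))
```

The disagreements are often the most informative output: tags only the human finds (like “holiday”) point at context a model misses, while tags only the model finds show what it attends to instead.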
Farida Vis, a professor at Manchester School of Art who is supervising the study, says: “The Learning to Look project is different in that sense, as it’s trying to do something new: it is trying to better understand how humans and machines can best work together in the context of AI. It will therefore develop approaches that allow us to make these kinds of comparisons: what happens if we take, say, 3,000 images (or more) and let them be tagged by a human? What do they find? What kinds of clusters can be produced from that? How does this compare to what the machine can do? What could the machine do with the carefully annotated data from the human? What we’re interested in is constantly making these side-by-side comparisons and gaining insight that will help both academia and industry. The other two projects purely focus on insight derived from humans.”
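That last question, what the machine could do with carefully annotated human data, has a familiar answer in machine learning: use the annotations as training labels. The sketch below shows the shape of that step only; both the image embeddings and the labels are random stand-ins, not project data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins: in practice the embeddings would come from a vision model
# and the labels from the human coder's annotations of the same images.
embeddings = rng.normal(size=(3000, 64))
human_labels = rng.integers(0, 2, size=3000)  # e.g. "protest" vs "not"

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, human_labels, test_size=0.2, random_state=0)

# Train on the human's labels, then check on held-out images how well
# the machine has internalised the human's way of looking.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))

# With the random labels used here, accuracy hovers around 0.5 (chance);
# real human annotations are what give the model something to learn.
```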
PhD applications to join the Learning to Look study are still open; more details are available on the Manchester Metropolitan University site.