BVI Research

BVI research collaborations 

Built on the belief that interdisciplinarity is central to future developments in this field, BVI brings together researchers from science, engineering, medicine and the creative arts with the aim of addressing grand challenges in vision research.

BVI is proud to host the only current EPSRC Platform Grant in Vision Science and Engineering - Vision for the Future.

Since its inception, BVI has been the umbrella for many new research activities, several examples of which are described below.


Perceptual Video Compression 

Visual experiences are key drivers, not just for the entertainment sector but also for business, security and communications technologies. Cisco predicts that video will account for over 80% of all internet traffic in 2019, with total annual IP traffic rising to 2 zettabytes (1 zettabyte = 10^21 bytes). Mobile network operators predict a doubling of wireless traffic every year for the next 10 years, driven primarily by video, and mobile video access continues to rise by 100% year on year. These factors place increased demands on communication networks, in particular those that support wireless access, and underline the need for transformational approaches to video compression.

In most cases, the target of video compression is to provide good subjective quality rather than simply to produce pictures that are mathematically closest to the originals. Based on this observation, BVI researchers have conceived a compression scheme that employs an analysis/synthesis framework rather than the conventional energy-minimisation approach. This so-called parametric coding method employs a perspective motion model to warp static textures and utilises texture synthesis to create dynamic textures, similar to techniques used in computer graphics. The new algorithm has been integrated into a full video coding framework, and results show significant bitrate savings of up to 60% at the same objective quality point. This work has spawned a major EU-funded international collaboration (Provision) between Bristol, HHI Fraunhofer Berlin, The University of Nantes, The University of Aachen, BBC Research, Google YouTube, and Netflix to devise a future generation of video coding standards.
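
The warping half of such a parametric scheme can be illustrated with a short sketch (Python with OpenCV and NumPy): a perspective (homography) motion model is fitted to a static textured region, and the decoder re-synthesises the region by warping a previously decoded frame, so only the model parameters need be transmitted for that region. This is a minimal reconstruction from the description above, not the actual BVI/Provision codec; the function names and toy data are our own.

    import numpy as np
    import cv2

    def estimate_homography(ref_pts, cur_pts):
        """Fit the 8-parameter perspective motion model from point matches."""
        H, _ = cv2.findHomography(ref_pts, cur_pts, method=cv2.RANSAC)
        return H  # 3x3 matrix: the only payload needed for this texture region

    def synthesise_static_texture(ref_frame, H, size):
        """Decoder side: warp the reference texture with the received model."""
        return cv2.warpPerspective(ref_frame, H, size)

    # Toy usage: four matched corners of a textured region in two frames.
    ref_pts = np.float32([[10, 10], [110, 12], [108, 90], [12, 92]])
    cur_pts = np.float32([[14, 8], [115, 14], [110, 95], [10, 90]])

    ref_frame = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
    H = estimate_homography(ref_pts, cur_pts)
    reconstruction = synthesise_static_texture(ref_frame, H, (128, 128))

Dynamic textures (water, foliage, smoke) are handled differently: rather than being warped, they are re-synthesised from a texture model, which is where the graphics-style texture synthesis mentioned above comes in.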

Dave Bull, Aaron Zhang


Measuring visual engagement

The visual world can be captivating: the sunrise over an ancient city, the spectacle of a high-speed car chase, the tension as an interviewer cross-examines a slippery politician. For individuals and organisations making visual content (including television and films), an important question is: what makes some material engaging and some not? In Bristol we have developed a suite of behavioural methods to measure this visual engagement, and we have validated them carefully and rigorously so that we can compare different cuts of a film or different display technologies. This allows us to maximise visual immersion and engagement and, in turn, deliver the best visual experience for the user.

Iain Gilchrist, Steve Hinde 


Estimating perceptual visual quality 

Assessing the perceptual quality of an image or video distorted by noise is one of the most critical yet challenging tasks in image and video processing. Visual perception is highly complex, influenced by many confounding factors, not fully understood and difficult to model. For these reasons, the characterisation of noisy imagery (e.g. after video compression) has invariably been based on subjective assessments, where a group of viewers are asked their opinions on quality under a range of test conditions. Traditional objective measures of video quality are usually computed from some distance measure between the noisy version of a picture and its original. It is, however, well known that the perceptual distortion experienced by the human viewer cannot be fully characterised by such simple mathematical differences. Because of the limitations of distortion-based measures, perception-based metrics have begun to replace them, offering the potential for enhanced correlation with subjective opinions. In this context, Aaron Zhang and David Bull in BVI have developed the Perception-based Video quality Metric (PVM). PVM simulates perception processes by adaptively combining noticeable distortions with measures of typical suprathreshold artefacts, such as blurring, using an enhanced non-linear model. Importantly, PVM offers better correlation with human opinions, across a wider range of test content, than any of its competitors. It also has lower complexity, making it more suitable for real-time decision making in applications such as video compression.
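
The distinction between distortion-based and perception-based measurement can be made concrete with a small sketch (Python/NumPy). The first function is the classic distortion route, PSNR; the second is a purely illustrative perception-style score, not the published PVM algorithm: it ignores errors below a just-noticeable-difference threshold, adds a crude blur term, and combines the two non-linearly, echoing the structure (though none of the detail) of metrics such as PVM. All names and parameter values here are hypothetical.

    import numpy as np

    def psnr(ref, dist):
        """Classic distortion-based metric: a distance from the original."""
        mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    def toy_perceptual_score(ref, dist, jnd=2.0, alpha=0.7):
        """Illustrative perception-style pooling (hypothetical, NOT PVM)."""
        err = np.abs(ref.astype(np.float64) - dist.astype(np.float64))
        visible = np.maximum(err - jnd, 0.0).mean()   # noticeable distortions only
        hf = lambda x: np.abs(np.diff(x.astype(np.float64), axis=1)).mean()
        blur = max(hf(ref) - hf(dist), 0.0)           # suprathreshold blurring
        return (visible ** alpha) * (1.0 + blur)      # non-linear combination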

Aaron Zhang, Dimitris Agrafiotis, Dave Bull


Colour in cinema 

We take it for granted that films are made in colour and that they broadly reflect the world we see. Yet colour films dominated film production only in the silent era (1894-1929) and then not again until the late 1960s onwards. Several projects led by Prof. Sarah Street in the Department of Film and Television research the nature and impact of colour filmmaking in British, American and European cinema. They investigate how colour films were made, from the application of colour by hand, stencil or applied tinting and toning methods that characterised the silent era, to photochemical processes such as Technicolor, and the monopack stocks that enabled colour films to dominate sound cinema. Analysing the production and reception of a great variety of films made in different global contexts reveals that colour perception, understanding and appreciation is a profoundly cultural phenomenon, influenced by prevailing aesthetic norms, national taste cultures and generic application. While new technologies often claim new capabilities, to a great extent today’s digital colour films follow the aesthetic conventions of past approaches to colour filmmaking. The research projects also have links with film restorers and those concerned with the preservation of our colour film heritage.

Sarah Street


Follow your eyes

Humans move their eyes about three times a second. The eyes move to point at objects that are of interest and then quickly move on. They move because visual ability is very good in the central part of vision (the fovea) and falls away dramatically from the current point of fixation. In a very real sense, we can only see clearly what we are currently looking at. When the eyes are stationary on an object, visual information about that object is gathered; at the same time, the limited vision away from the current fixation point is used to decide where to look next. Bristol has a long tradition of recording eye movements to understand visual behaviour. Understanding this Active Vision process is central to our fundamental understanding of human vision, and allows us to scale up our knowledge to address more complex applied visual problems.
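
As a flavour of how such recordings are analysed, the sketch below (Python/NumPy) implements the common velocity-threshold approach to separating fixations from saccades in an eye-tracking trace. The threshold and sampling rate are illustrative defaults, not values from the Bristol studies.

    import numpy as np

    def detect_saccades(x_deg, y_deg, fs_hz=1000.0, vel_thresh=30.0):
        """Mark samples whose gaze velocity (deg/s) exceeds a saccade threshold.

        x_deg, y_deg: gaze position in degrees of visual angle.
        fs_hz: eye-tracker sampling rate in Hz.
        """
        vx = np.gradient(x_deg) * fs_hz
        vy = np.gradient(y_deg) * fs_hz
        speed = np.hypot(vx, vy)       # instantaneous gaze velocity
        return speed > vel_thresh      # True for saccadic samples

With roughly three saccades a second, most of the resulting trace is fixation, which is exactly where detailed visual information is gathered.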

Iain Gilchrist


Vision for autonomous locomotion

Numerous scenarios exist where it is necessary or advantageous to classify surface material at a distance from a moving forward-facing camera. These include the use of image-based sensors for assessing and predicting terrain type in association with the control or navigation of autonomous vehicles or robots. In the real world, the upcoming terrain may not just be flat but might be sloping; it may also be slippery or rough, or present other characteristics that would cause a vehicle to change speed or direction in order to ensure safe and smooth motion. Work in BVI’s Visual Information Laboratory has produced an integrated framework to solve this problem. It specifically addresses issues such as motion blur, which can reduce the performance of a terrain classifier; robust texture features have been developed to deal with this problem. The researchers have also produced a novel algorithm for terrain-type classification based on monocular video captured from the viewpoint of human locomotion. This is particularly important for biped robots and takes account of gait, where probabilities of path consistency are employed to improve terrain-type estimation. The figure shows the terrain classification for tracked regions, where green, red and blue correspond to areas classified as hard surfaces, soft surfaces and unwalkable areas, respectively. The terrain gradient also influences the speed and power of a vehicle traversing it. A novel texture-based method for estimating the orientation of planar surfaces, under the basic assumption of homogeneity, has therefore been developed for single-image sensors.
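
As an illustration of the texture-feature route described above, the sketch below (Python with scikit-image and scikit-learn) describes each image patch with a local-binary-pattern histogram and trains a classifier over the three terrain classes from the figure. The LBP descriptor stands in for the robust, blur-tolerant features developed in the BVI work, and the training data are synthetic placeholders.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    P, R = 8, 1  # LBP neighbourhood: 8 samples at radius 1

    def texture_descriptor(patch_gray):
        """Histogram of uniform local binary patterns over a grey patch."""
        lbp = local_binary_pattern(patch_gray, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    # Hypothetical patches labelled 0=hard, 1=soft, 2=unwalkable.
    rng = np.random.default_rng(0)
    patches = rng.integers(0, 256, size=(30, 32, 32)).astype(np.uint8)
    labels = rng.integers(0, 3, size=30)

    clf = SVC(kernel="rbf")
    clf.fit([texture_descriptor(p) for p in patches], labels)
    prediction = clf.predict([texture_descriptor(patches[0])])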

Jeremy Burn, Dave Bull, Pui Anantrasirichai, Iain Gilchrist


Visual disorders and retinal development 

Our vision does not depend solely on the simple detection of different patterns and wavelengths (colours) of light. Our eyes sense the environment around us and our brain interprets this information to make judgements about the nature of objects, their position in space, movement, significance and familiarity to us.

The human eye is designed to focus light onto the retina. The light-sensitive cells of the retina are rod photoreceptors, responsible for night vision, and cone photoreceptors, responsible for colour vision and reading. Photoreceptors convey information to nerve cells in the retina that are much like nerve cells in the rest of the brain. Regarded as part of our central nervous system, the neural circuitry of the retina provides some insight into how the brain works.

The Human Genome Project has informed scientists about the DNA sequences that instruct the developing embryo to form a normal visual system. We also know about some of the differences between individuals in their DNA code that influence normal variations, such as iris colour and refractive error (short- or long-sightedness). An individual’s DNA can now be sequenced to work out their future risk of eye disease or the cause of an inherited visual disorder.

Dr Denize Atan and the Bristol Vision Institute are interested in working out the identity and function of genes that influence the normal development of our eyes and visual system, and what happens to our vision when these genes are faulty. This research is particularly focused on understanding how the neural circuitry of the retina is wired together and what this can tell us about the circuitry of the rest of the brain. By taking a molecular approach and looking inside cells and our DNA to identify the processes that influence our vision of the world, this research hopes to further our understanding of retinal development.

Denize Atan, Cathy Williams


Visual biometrics

Research in BVI, led by Dr Tilo Burghardt, aims at providing non-invasive solutions to problems in field biology - to better understand and conserve endangered species. Specifically, the team has developed approaches to facilitate remote monitoring and identification of individual animals in large populations, using techniques from computer vision and human biometrics. Tilo’s original work centred on the African penguin (Spheniscus demersus), where he developed, with Leverhulme Trust funding and in collaboration with the Animal Demography Unit at the University of Cape Town and Bristol Zoo Gardens, an autonomous system capable of monitoring and recognising individual penguins in their natural environment without tagging or otherwise disturbing the animals. Similar approaches have been used to monitor other endangered species, most recently great white sharks, in which case the biometric is based on the characteristics of the shark’s fin. The approach works robustly in extremely difficult circumstances, dealing with different scales, viewpoints and occlusions.

Tilo Burghardt


Camouflage in nature and war and Biological motion and coloration

Camouflage, whether the product of technology or evolution, is as much an adaptation to the perception and mind of the viewer as it is to the environment. In nature and war, concealment may be necessary against a foe with infra-red, ultra-violet or polarization vision, at ultra-high spatial or temporal resolution, and perhaps with hyperspectral colour dimensionality. Understanding evolution’s responses to the problems of defeating detection and identification by such foes, and how the human mind segments, recognises and tracks targets when those targets resemble the background, are core intellectual challenges for www.camolab.com’s biologists, psychologists and computer scientists. Military and animal coloration must often satisfy other constraints, in terms of recognisability and physical robustness. Understanding these trade-offs offers biologically inspired solutions, underpinned by theory, for optimising concealment (or conspicuity) in military and civilian contexts.

For more information, visit the Camo Lab website.

Innes Cuthill, Nick Scott-Samuel, Roland Baddeley


Colour and signalling in animals and plants and Iridescence in nature: structure and function

Colour in nature can and, for a complete understanding, must be studied from multiple perspectives. The mechanisms of colour production include not only pigments, but also the properties of cell surfaces and structures within body coverings such as skin, hair, feathers and cuticle. These are behind the intense, direction-dependent and hue-changing iridescent colours seen in a hummingbird’s throat patch or a jewel beetle’s wing cases. How such colours are produced raises fascinating research questions, from the photonics of production to their function and evolution. The latter two topics require modelling of visual perception, and of cognitive mechanisms such as learning and memory, in the receivers of the colour signals, whether intended (e.g. a mate to be impressed) or not (a predator seeking prey). This integrated approach also sheds light on important applied questions, ranging from plant-pollinator ecology to the design of warning signage in urban environments.

Nick Roberts, Heather Whitney, Innes Cuthill, Nick Scott-Samuel

More research


For more information on all of our research projects please email bvi-enquiries@bristol.ac.uk