The Homeview Project


One of the most fundamental questions in psychology concerns the role of experience. What are the essential components of human experience? What are the malleable points at which small differences in experience can lead to different developmental outcomes? What are the mechanisms that underlie developmental change?





Answering these questions requires that we know much more than we currently do about everyday experience. With these larger questions in mind, the Homeview project is collecting a large corpus of infant-perspective scenes (using head cameras) and audio in the home as infants 1 to 24 months of age go about their daily lives.






Watch our video: IU professors Linda Smith, David Crandall, and Karin James talk about how IU Bloomington's first Emerging Areas of Research initiative, "Learning: Machines, Brains, and Children," will revolutionize our understanding of how children, and robots, learn. The Homeview Project is a core part of this larger initiative.

This project is funded by the National Science Foundation and, in part, by an Emerging Areas of Research award from Indiana University.




The Corpus


The corpus, with over 500 hours of head-camera video, promises new insights into the natural statistics of visual experience: for visual development generally, for visual object recognition, for human face perception, and for object name learning. We extract images from the video at a rate of one per second, creating an image corpus of nearly 2 million images.
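As an illustration of this extraction step, the minimal sketch below (written in Python with OpenCV; the file names and output layout are our own assumptions, not the project's actual pipeline) saves one frame per second from a head-camera video:

    import cv2  # OpenCV, used here only to decode video and write images

    def extract_frames(video_path, out_dir, every_sec=1.0):
        """Save one frame per `every_sec` seconds of video as JPEG images."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # head camera records at 30 fps
        step = max(1, round(fps * every_sec))    # frames between saved images
        index = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:                           # end of video (or read error)
                break
            if index % step == 0:
                cv2.imwrite(f"{out_dir}/frame_{saved:06d}.jpg", frame)
                saved += 1
            index += 1
        cap.release()
        return saved

    # Hypothetical usage: one image per second from a 30 fps home recording
    # extract_frames("infant_home_video.mp4", "images", every_sec=1.0)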







Participants were 91 infants (46 female, 45 male) aged 1 to 24 months, from middle-class families in Monroe County, Indiana, who were recruited through county birth records and community events.






We have an additional 40 participants (24 female, 16 male) aged 1 to 15 months from a fishing community in Chennai, India.














The Head Camera


Recording the availability of faces in infants' everyday environments requires a method that does not disrupt those environments. Accordingly, we used a wearable camera that was lightweight, cable-free, attached to daily-wear hats, and easy for parents to use.







The head camera was the commercially available Looxcie 2. The camera has three properties critical to this study: a very lightweight 22 g body, built-in recording capacity, and a rechargeable non-heating battery. Each camera records 3 to 4 hours of video at 30 frames per second.





The camera was attached to a snug-fitting hat. Parents were given two hat-camera systems and asked to record up to 6 hours of video. Video was stored on the camera until parents had completed their recording and was then transferred to laboratory computers for storage and processing.









Procedure


In a pre-visit, parents were informed about the goal of the study, consent was obtained, and parents were instructed in the use of the camera. A hat was selected and fitted to the child. Subsequently, the materials were delivered to the infant's home and the parents were reinstructed in the use of the camera.













Parents were not told that we were interested in faces or social events; rather, they were told that we were interested in visual development and the typical range of their infant's visual experiences. They were asked to record during the infant's waking hours, aiming to capture four to six hours of video of daily activities while the infant was awake and alert.





Because of the demands of parenting young infants, parents were given up to two weeks to complete their recording. The cameras were collected once recording was complete; parents were debriefed and, consistent with the consent procedure, were asked whether they wanted any segments deleted.






Coding the Corpus


The videos collected from parents were screened for privacy concerns and accidental recordings (1.5% of the total recording), and those sections were removed from the dataset.

Trained coders used one of two techniques to answer specific research questions about infant experience, such as "What proportion of the recording contained human faces?" or "What do the visual scenes of mealtimes look like?"


Continuous video: Coders watched the videos and annotated segments of interest. These segments were in turn coded for more specific questions.



Down-sampled images: Videos were converted to images, which were then selected at specific time intervals (typically one image from every 5 seconds of video). Coders annotated each image to answer specific questions (a minimal sketch of this step appears below).
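The sketch below illustrates the down-sampling step under our own assumptions; the directory layout, file naming, and coding-sheet columns (including the face_present field) are hypothetical, not the lab's actual tools. Starting from the one-image-per-second corpus, it keeps one image from every 5 seconds and writes a blank annotation row for coders:

    import csv
    import glob

    def build_coding_sheet(image_dir, sheet_path, interval_sec=5):
        """Select one image per `interval_sec` seconds (source images are
        one per second of video) and write one blank annotation row each."""
        images = sorted(glob.glob(f"{image_dir}/frame_*.jpg"))
        with open(sheet_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["image", "time_sec", "face_present", "notes"])
            for i, path in enumerate(images[::interval_sec]):
                writer.writerow([path, i * interval_sec, "", ""])

    # Hypothetical usage: one image from every 5 seconds of video
    # build_coding_sheet("images", "coding_sheet.csv", interval_sec=5)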




Coding the corpus is an ongoing process that changes with each major research question we ask. Research papers based on this project contain detailed descriptions of how we coded for each research question.



Sample Data


In the Lab

Parent and Toddler Free Play

Parent View and Child View: a child viewing toys on the floor.


In the Home

These videos were obtained in the home setting. See how selective our momentary view of the world is and how it changes with development.

A 4-month-old looking around from a baby seat.

A 7-month-old crawling.

A 12-month-old playing with toys.

A 13-month-old toddler walking.



Publications and Conference Proceedings


Smith, L. B., & Slone, L. K. (2017). A developmental approach to machine learning? Frontiers in Psychology, 8:2124. PMCID: PMC5723343

Clerkin, E. M., Hart, E., Rehg, J. M., Yu, C., & Smith, L. B. (2017). Real-world visual statistics and infants' first-learned object names. Philosophical Transactions of the Royal Society B, 372. PMCID: PMC5124080

Jayaraman, S., Fausey, C. M., & Smith, L. B. (2017). Why are faces denser in the visual experiences of younger than older infants? Developmental Psychology, 53(1), 38-49. PMCID: PMC5271576

Fausey, C. M., Jayaraman, S., & Smith, L. B. (2016). From faces to hands: Changing visual input in the first two years. Cognition, 152, 101-107. PMCID: PMC4856551

Bambach, S., Crandall, D., & Smith, L. B. (2016). Active viewing in toddlers facilitates visual object learning: An egocentric vision approach. In Proceedings of the 38th Annual Conference of the Cognitive Science Society.

Bambach, S., Smith, L. B., Crandall, D., & Yu, C. (2016). Objects in the center: How the infant's body constrains infant scenes. In IEEE 6th Joint International Conference on Development and Learning and Epigenetic Robotics. [Distinguished Oral Presentation Award]

Jayaraman, S., Fausey, C. M., & Smith, L. B. (2015). The faces in infant-perspective scenes change over the first year of life. PLoS ONE, 10(5). PMCID: PMC4445910



PowerPoints and Posters


Linda B. Smith has been honored with the 2017 IU Distinguished Faculty Research Lecture. Her lecture on March 29th at the IU Cinema was titled "How Babies Learn Words and Developing Environments".


Tay, C., Smith, L. B., & Yu, C. (2017, April). Slow change: The visual context for real-world learning. Poster presented at the biennial meeting of the Society for Research in Child Development, Austin, Texas.

DeSerio, C. (2017, April). Developmental changes in natural visual statistics. Talk presented at the Developmental Seminar, Psychological and Brain Sciences, Indiana University, Bloomington.

Abney, D. H., Jayaraman, S., Fausey, C. M., Slone, L. K., & Smith, L. B. (2017, March). Burstiness dynamics and nested information in naturalistic infant-perspective scenes. Poster presented at the International Convention of Psychological Science, Vienna, Austria.

Clerkin, E. M., Yu, C., & Smith, L. B. (2016, November). Word learning from visual prevalence: Evidence from first-person infant views. Lecture presented at the Boston University Conference on Language Development, Boston, Massachusetts.

Jayaraman, S. (2016, January). The everyday distribution of infant visual ecology. Talk presented at the University of California, Davis.

