ScienceDaily (Oct. 5, 2012): Tokyo Institute of Technology researchers use fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.
Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can 'think' and 'see' in the same way as humans. Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, has completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.
The participants viewed pictures of animals and hand tools accompanied by an auditory or written (orthographic) description, and were asked to silently 'label' each pictured object with certain properties while undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified activation patterns distinguishing the two semantic categories (animal or tool).
After 'training' the algorithms in this way on part of the auditory session data, the computer correctly classified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer on auditory data but testing it on orthographic data, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.
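The summary above does not spell out the study's exact classification procedure, but the general idea of within-modality versus cross-modal decoding can be illustrated with a short, hypothetical sketch: train a classifier on per-trial voxel patterns from one session, then score it both on held-out trials from the same session and on trials from the other modality. The sketch below uses Python with scikit-learn and synthetic arrays in place of real fMRI data; the array names, sizes, and the choice of a linear support vector machine are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500

# Synthetic stand-ins for per-trial voxel patterns: half "animal" (label 0),
# half "tool" (label 1), with a small class-dependent offset so the two
# categories are separable in principle. Real studies would extract these
# feature vectors from the preprocessed fMRI scans.
y_auditory = np.repeat([0, 1], n_trials // 2)
X_auditory = rng.normal(size=(n_trials, n_voxels)) + 0.3 * y_auditory[:, None]

y_orthographic = np.repeat([0, 1], n_trials // 2)
X_orthographic = rng.normal(size=(n_trials, n_voxels)) + 0.3 * y_orthographic[:, None]

clf = LinearSVC(max_iter=5000)

# Within-modality performance: train and test on the auditory session
# using cross-validation over its trials.
within_accuracy = cross_val_score(clf, X_auditory, y_auditory, cv=5).mean()

# Cross-modal performance: train on the auditory session, test on the
# orthographic session.
cross_accuracy = clf.fit(X_auditory, y_auditory).score(X_orthographic, y_orthographic)

print(f"within-modality accuracy: {within_accuracy:.2f}")
print(f"cross-modal accuracy:     {cross_accuracy:.2f}")

In a sketch like this, the within-modality score plays the role of the 80-90% figure reported above, and the cross-modal score corresponds to the lower 65-75% range obtained when training and test data come from different stimulus modalities.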
Story Source:
The above story is reprinted from materials provided by Tokyo Institute of Technology, via ResearchSEA.
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
Journal Reference:
- Hiroyuki Akama, Brian Murphy, Li Na, Yumiko Shimizu, Massimo Poesio. Decoding semantics across fMRI sessions with different stimulus modalities: a practical MVPA study. Frontiers in Neuroinformatics, 2012; 6. DOI: 10.3389/fninf.2012.00024