MMSys 2019
ACM Multimedia Systems Conference, Amherst, MA, USA, June 18–21, 2019
Announcements: Pictures from MMSys'19 now available here.

Keynotes and Overview Talks

Keynotes

Jordi Cenzano, Director of Engineering for Advanced Technologies at Brightcove, “Challenges of livestreaming at scale”


Livestreaming can be quite challenging, mainly because OTT is now replacing traditional linear TV solutions (cable, satellite, and OTA), and viewers expect at least the same level of quality, reliability, and features as before. The big challenge for OTT is to deliver that same user experience while, in most cases, relying on technologies that were not designed for real-time communication and on resources that are volatile and not guaranteed. But do not worry, there is good news too: these new ways of delivering content open the door to a set of features that were not possible before, such as stream personalization and direct viewer feedback.

Jordi is currently working on research projects at Brightcove as Director of Engineering for Advanced Technologies. He first joined Brightcove in 2014 as a principal engineer and got to work immediately, leading the charge in revamping both the Zencoder Live and Brightcove Live products. Prior to joining the Brightcove team, Jordi worked as a broadcast engineer, and in 2007 he became chief technology officer of a mid-size Spanish broadcaster. He loves technology, coding, and working with cutting-edge innovations, and outside of Brightcove, Jordi loves playing sports.


Weidong Mao, Senior Fellow, Comcast Cable

Multichannel video providers are driving the transition to a next-generation IP video architecture to deliver linear, VOD, and cloud DVR services to millions of consumers. These services are offered to primary-screen set-top boxes and second-screen devices, both inside and outside the home. This talk will present key trends, challenges, and solutions in technology and standards for end-to-end IP video, including advanced compression such as AVC and HEVC, DASH adaptive streaming, Dynamic Ad Insertion (DAI), Alternative Content, Content Delivery Networks (CDN), Cloud DVR, and Digital Rights Management (DRM). In addition, this talk will discuss scalability and resiliency solutions for the next-generation IP video architecture.
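
The abstract above mentions DASH adaptive streaming, where a client switches among several encoded representations based on observed network conditions. As a rough, hypothetical sketch (not drawn from the talk; the bitrate ladder and safety margin below are illustrative assumptions), a simple throughput-based selection in Python might look like this:

    # Illustrative throughput-based bitrate selection, the basic idea behind DASH adaptive streaming.
    # The representation ladder and safety margin are hypothetical values, not from the talk.
    REPRESENTATIONS_KBPS = [500, 1200, 2500, 5000, 8000]  # hypothetical DASH bitrate ladder
    SAFETY_MARGIN = 0.8  # keep 20% headroom to absorb throughput variance

    def select_bitrate(measured_throughput_kbps: float) -> int:
        """Return the highest representation that fits within the discounted throughput."""
        budget = measured_throughput_kbps * SAFETY_MARGIN
        eligible = [r for r in REPRESENTATIONS_KBPS if r <= budget]
        return max(eligible) if eligible else min(REPRESENTATIONS_KBPS)

    # Example: 3.5 Mbps of measured throughput with 20% headroom selects the 2500 kbps rendition.
    print(select_bitrate(3500))  # -> 2500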

Weidong Mao is currently Senior Fellow at Comcast Cable. He is a leading industry expert in digital video, including IP video, advanced compression, CDN, Video On Demand, and Cloud DVR. Prior to joining Comcast, he was Chief Architect at Liberate Technologies, a provider of an interactive TV software platform. Previously, he was co-founder and CTO of MoreCom, which was acquired by Liberate. He also served as Director of the Telecommunication Unit of Motorola and was a Senior Member of Research Staff at Philips Labs. He received his M.A. and Ph.D. in electrical engineering from Princeton University in 1991, and his B.S. in electrical engineering from Peking University, Beijing, China, in 1986. He holds 32 awarded patents and has numerous publications and presentations. He is an author or co-author of various standard specifications (MPEG, SCTE, CableLabs). He was elevated to IEEE Fellow in 2014 for contributions to video on demand technologies and cloud computing. He is a member of ACM, SCTE, and SMPTE. In 2018, he received the IEEE Philadelphia Section Benjamin Franklin Key Award for outstanding technical innovation.


Nimesha Ranasinghe, Multisensory Interactive Media Lab, University of Maine, “Experience media: moving towards an age of digital experience”


When humans interact with the outside world or one another, all of the senses are engaged; a true conversation is considered a full sensory experience. From early ages to the present day, people have desired multisensory experiences in every aspect of their lives. From trying different foods and visiting different places to playing games in virtual reality, they continuously seek sensory stimuli that add up to a wholesome experience, yet current technology lacks many essential sensory channels. This talk highlights several research works on “Experience Media” that explore possibilities for novel multisensory interactive digital media technologies aimed at achieving total immersion in day-to-day digital interactions. It also emphasizes the need to look beyond the current ‘age of information’ and step into a new ‘age of experience.’

Nimesha Ranasinghe is an Assistant Professor at the School of Computing and Information Science and directs the Multisensory Interactive Media lab (MIM lab - www.mimlab.info/) at the University of Maine. He completed his Ph.D. at the Department of Electrical and Computer Engineering, National University of Singapore (NUS), in 2013. Dr. Ranasinghe’s research interests include multisensory interactive media, human-computer interaction, and augmented and virtual reality. He is well known for his Digital Taste (a.k.a. Virtual Flavors) and Virtual Cocktail (Vocktail) inventions and has been featured in numerous media outlets around the world, including New Scientist, the New York Times, Time Magazine, BBC Radio, the Discovery Channel, and Reuters. Furthermore, he has published his work in several prestigious academic conferences and journals, including the ACM Conference on Human Factors in Computing Systems (CHI), the ACM Conference on Multimedia, and the Journal of Human-Computer Studies. He has received numerous awards for his research; in 2014 his work on the Digital Lollipop was selected as one of the ten best innovations in the world by the Netexplo Forum at UNESCO headquarters in Paris.


Overview Talks

Dave Oran, Independent Researcher, “In-Compute Networking and In-Network Computing - the great confluence”


Two contemporaneous trends are merging the two separate yet interdependent technologies of computing and networking. Hardware and software historically associated with computing complexes (general-purpose CPUs running virtual machines, conventional operating systems, and languages) are being used more and more to host networking functions at all but the highest speed tiers. Networking devices such as switches, routers, and NICs are becoming programmable in ways that allow general-purpose computing to be done “in the network”. This talk examines these trends, presents some of the salient research illuminating the advantages and limitations, and speculates on where this merging of technologies might take us.

David Oran was until 2016 a Fellow at Cisco Systems. He is now independent and pursuing his research interests in a number of areas, including in-network computing and Information Centric Networking. His recent work has been in congestion control for ICN and using ICN as a substrate for modern distributed computing languages. His long-term technical interests lie in the areas of Quality of Service, Internet multimedia, routing, and security. He was part of the original team that started Cisco’s Voice-over-IP business in 1996 and helped grow it into a multi-billion-dollar revenue stream.

Prior to joining Cisco, Mr. Oran worked in the network architecture group at Digital Equipment, where he designed routing algorithms and a distributed directory system. Mr. Oran has led a number of industry standards efforts. He was a member of the Internet Architecture Board, co-chair of the Speech Services working group, and served a term as area director for Routing in the IETF. He currently serves as co-Chair of the Information Centric Networking Research Group of the IRTF. He was on the board of the SIP Forum from its inception through 2008. He also serves on the technical advisory boards of a number of venture-backed firms in the networking and telecommunication sectors.

Mr. Oran has a B.A. in English from Haverford College.

Alia Sheikh, Senior Development Producer, BBC R&D, “Experiments in Producing Immersive Content”


We have seen a push towards Virtual Reality stories driven largely by a ‘wow’ factor. If the true potential of the medium lies in allowing the audience to glimpse previously invisible perspectives and to feel connected to narratives in ways that have not previously been possible, how can we best present stories to take advantage of this potential? This session will give an overview of some of BBC Research & Development’s work in this area.

Alia Sheikh is a director, producer, research scientist, and broadcast research engineer. Her work is focused on ensuring that the world’s oldest national broadcasting organisation is able to understand the language and the narrative potential of future immersive formats.

Walt Husak, Director of Image Technologies, Dolby Labs, Inc., “Development of a High Dynamic Range (HDR) Ecosystem”


Content capture, distribution and presentation systems have long surpassed the capabilities of Standard Dynamic Range (SDR) video standards. SDR video standards and practices were developed to service Cathode Ray Tube (CRT) capture, analog distribution, and CRT display systems. The transition to digital and High Definition TV maintained the conventional SDR capabilities even though device capabilities were improving. Further advances in imaging sensors, deployment of higher bit depth video codecs and display devices with superior contrast ratios offered the opportunity to radically improve the audience’s user experience.

The rich variety of source and presentation device capabilities was far different from the uniform capabilities of traditional SDR devices. The migration from SDR to HDR required improved technology and production practices at nearly every point in the content chain. The scale of these changes meant that an entirely new ecosystem needed to be developed and deployed.

This presentation will discuss the development of the HDR ecosystem, including requirements, considerations, and specific distribution use cases. Geographic and commercial considerations will be introduced along with solutions to mitigate these concerns. A high-level discussion of proposed and deployed HDR systems will also be presented.

Walt Husak is the Director of Image Technologies at Dolby Labs, Inc. He began his television career in 1990 at the ATTC, where he carried out objective video measurements and RF multipath testing of HDTV systems proposed for the ATSC standard. Walt also worked on issues related to global HDTV deployments, including topics such as video compression, DTV RF transmission, and overcoming multipath signals in urban and rural environments.

Walt joined Dolby in 2000 as a member of the CTO’s office working on video compression and imaging systems for Digital Cinema and Digital Television. He has managed or executed visual quality tests for DCI, ATSC, Dolby, and MPEG and is currently focusing his efforts on HDR. Walt has authored numerous articles and papers for several major industry publications and holds several patents. He has also been recognized as participating in three teams that have received Emmy awards.

Walt is a member of SMPTE, MPEG, JPEG, ITU-R, ITU-T, and SPIE. He is the Chairman of the USNB committee for JPEG. He also chaired numerous AHGs, Study Groups and Sub-Groups in ATSC, JPEG, MPEG and SMPTE. He has served as the liaison between SMPTE and JPEG/MPEG for the last fifteen years. Walt was made a SMPTE Fellow on the 100th anniversary of SMPTE.


Sponsors

ACM, SIGMM

Co-sponsors

SIGCOMM, SIGMOBILE, SIGOPS

Gold supporters

Adobe, Netflix, YouTube

Silver supporters

Bitmovin, Comcast, DASH-IF, Unified Streaming, Dolby Digital, Brightcove