LuckyGirl MEDIA recommends

Choices and Trends for Women "from Teens to Grandmothers"

Technology SMPTE 2013 Full Program

The Society of Motion Picture and Television Engineers holds its annual Technical Conference, some of which can be viewed by streaming.

Tune in here: www.smpte2013.org

Program for SMPTE 2013 Annual Technical Conference

Rooms: Exhibit Hall, Hollywood Ballroom, Mount Olympus Room, Salon 1, Salon 2, Theatre (Chinese 6). Parallel sessions below are separated by "|".

Monday, October 21

09:00 SMPTE 2013 Symposium

Tuesday, October 22

09:00 Welcome and Open
09:15 Opening Keynote
10:15 Break
10:45 Digital Cinema (Salon 1) | The move to the all-IT Facility – Part 1 (Salon 2)
12:30 Industry Luncheon
14:15 Multi-view Production (Salon 1) | The move to the all-IT Facility – Part 2 (Salon 2)
15:45 Break
16:15 Multi-view Post Production (Salon 1) | The move to the all-IT Facility – Part 3 (Salon 2)
18:00 Opening Night Reception

Wednesday, October 23

09:00 Evolving Technologies for Broadcast Facilities – Part 1 (Salon 1) | Storing and Protecting Your Assets – Part 1 (Salon 2) | Audio Developments for Cinema and Broadcast (Theatre, Chinese 6)
10:30 Break
11:00 Evolving Technologies for Broadcast Facilities – Part 2 | Storing and Protecting Your Assets – Part 2
12:30 Fellows Luncheon
14:15 Color Management | Advancements in Image Processing – Part 1
15:45 Break
16:15 Stereoscopic 3D Imaging, Processing, Distribution and Display | File Based Workflows
18:00 Annual Membership Meeting

Thursday, October 24

09:00 Cloud-based Systems – Part 1 | Advancements in Image Processing – Part 2
10:30 Break
11:00 Cloud-based Systems – Part 2 | Advancements in Image Processing – Part 3
12:30 Boxed Lunch in Exhibit Hall
14:00 More, Better, Faster Pixels – Part 1 | Advancements in Image Processing – Part 4
15:30 Break
16:00 More, Better, Faster Pixels – Part 2 | Cinematography
19:00 Honors & Awards Ceremony and Dinner
22:00 Afterparty and SMPTE Jam

Monday, October 21

09:00 – 18:00

SMPTE 2013 Symposium

Next Generation Imaging Formats: More, Faster, and Better Pixels

Room: Salon 1

This one-day technical Symposium will provide you with an understanding of the technology landscape, separating fact from fiction.

Program Chair, Skip Pizzi, NAB Senior Director, New Media Technologies
Technical Track – Join the experts and discuss the next generation of image formats including discussions on higher frame rates, wider color gamut and increased dynamic range, along with 4K (UHD-1) and 8K (UHD-2) resolutions. Offering a clear picture of the current technology landscape, the Symposium will be valuable to anyone responsible for delivering high-quality imaging in broadcast, Internet, cinema, and broadband applications.

Led by Chris Chinnock, President/Founder of Insight Media
Business Track – Focused on the business, investment, and ROI issues associated with broader adoption of 4K/UHD equipment and content in consumer and professional markets. This track will aid senior business executives, decision makers, product planners, and investors in understanding the complex issues involving the rollout of 4K. A series of panel discussions will examine fundamental questions around the timing, positioning, and monetization of investments in the 4K/UHD ecosystem.

Tuesday, October 22

09:00 – 09:15

Welcome and Open

Room: Salon 1

09:15 – 10:15

Opening Keynote

Room: Salon 1
09:15 Opening Keynote
Thomas Gewecke (Warner Bros., USA)
Chief Digital Officer and Executive Vice President for Strategy and Business Development
Presenter bio: Thomas Gewecke was named Chief Digital Officer and Executive Vice President, Strategy and Business Development, Warner Bros. Entertainment in May 2013. In this position, he is responsible for driving the Studio’s worldwide digital growth and managing its global business strategy. Gewecke is charged with coordinating the company’s various digital distribution strategies to maximize the value of Warner Bros.’ content across all current and emerging digital exhibition platforms. He also oversees Warner Bros. Technical Operations and Corporate Business Development as well as Warner Bros. Home Entertainment’s Direct-to-Consumer, Business Development and Flixster groups. Prior to his current role, Gewecke served as President of Warner Bros. Digital Distribution, a position he had held since January 2008. Under Gewecke, WBDD expanded its global footprint to make the Studio’s movie and television content available for transactional purchase in every major market, through more than 100 digital retail partners, and was consistently recognized as a leader and innovator in the distribution of digital content. WBDD led the industry in offering On Demand movies day-and-date with DVD release, bundling digital copies with new physical releases worldwide and making movies available directly to fans through new channels, including Facebook and as international “App Editions” on the iOS platform. To expand consumer access to the Studio’s vast library, WBDD launched Warner Archive, a groundbreaking service that makes more than 1,000 “out of print” classic movie and TV titles available online and through manufacturing-on-demand technology. WBDD also develops and distributes critically acclaimed digital original programming and content, including H+ The Digital Series, named Best Action or Sci-Fi Series at the 2013 Streamy Awards, and the Inside the Script series of digital scripts, which received a 2013 Publishing Innovation Award. The division supported the record-breaking 2013 Kickstarter campaign for The Veronica Mars Movie, which it will distribute in early 2014. Gewecke oversaw the Studio’s May 2011 acquisition of Flixster and Rotten Tomatoes, a leading mobile movie service and online aggregator of movie reviews. Flixster was the first service to offer access to UltraViolet, the industry standard for storing digital movie collections in the cloud, and is used on the web or mobile devices by more than 30 million people every month, globally. Flixster serves as a leading provider of UltraViolet EST, Digital Copy, and Disc to Digital services. Gewecke joined WBDD from Sony BMG Music Entertainment, where as Executive Vice President, Global Digital Business he was responsible for worldwide digital business development efforts and oversaw the creation of new digital ventures. In addition, Gewecke oversaw the company’s online advertising, digital long-form video and direct-to-consumer commerce businesses, and led its expansion into the US off-deck mobile market. Prior to Sony BMG, Gewecke was Senior Vice President of Business Development in the Digital Services Group for Sony Music Entertainment, Inc, where he founded and led its mobile business, building a leading market position for the company, launching the industry’s first U.S. master ringtone offering in mid-2003 and becoming the only record label to create and operate a major U.S. on-deck, direct-to-consumer mobile storefront. Gewecke also supervised Sony Music’s interest in certain digital media investments. 
Gewecke also held the position of Publisher of the PC World Online Network at International Data Group, a leading publisher of print and online technology magazines. During his tenure with the company, he built IDG’s largest online publishing division and oversaw the launch of multiple Internet properties. He also played a major role in the development of IDG’s overall digital strategy. Gewecke graduated from Harvard College in 1991 with a bachelor’s degree, magna cum laude, in social studies.

10:15 – 10:45

Break

30 minutes

Rooms: Salon 1, Salon 2

10:45 – 12:15

Digital Cinema

Three 30-minute Papers

Room: Salon 1

Chair: Peter Ludé (Consultant, USA)
10:45 Full accessibility system for digital cinema using WIFI
Israel Gonzalez-Carrasco (Universidad Carlos III de Madrid, Spain); Ángel García Crespo (Universidad Carlos III de Madrid, Spain); José López-Cuadrado (Universidad Carlos III de Madrid, Spain); Belén Ruiz (Universidad Carlos III de Madrid, Spain)
Five percent of the world's population has a sensory disability (visual or hearing). Movies are part of our culture, yet they are not accessible to these persons. Studies now show that ticket sales increase by 10% for accessible movies. We present a low-cost WiFi system that transmits the audio description to the spectator's smartphone. The system also sends subtitles (in multiple languages) and the sign-language interpreter to a smartphone or augmented-reality glasses. It costs only 10% as much as the old systems and can be deployed without interference. Several theatres in Spain are using this system successfully.
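
For readers curious about the mechanics, a one-to-many delivery like this can be sketched in a few lines of Python using UDP multicast on the venue's WiFi LAN. The group address, port, and framing below are illustrative assumptions, not the authors' actual protocol:

    import socket

    MCAST_GROUP = "239.1.1.1"  # assumed administratively-scoped multicast address
    MCAST_PORT = 5004          # assumed port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep traffic on the LAN

    def send_chunk(audio_payload: bytes, seq: int) -> None:
        # Prefix a 2-byte sequence number so receivers can detect packet loss.
        sock.sendto(seq.to_bytes(2, "big") + audio_payload, (MCAST_GROUP, MCAST_PORT))

Every phone joined to the multicast group receives the same audio-description stream at once, which is what keeps the per-seat cost low.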
Presenter bio: Israel Gonzalez-Carrasco is an assistant professor in the Computer Science Department of Universidad Carlos III de Madrid. He holds a PhD in Computer Science from the same university. He is co-author of several contributions published in international congresses and journals. His main lines of research are neural networks, expert systems, and software engineering applied to different domains. He is involved in several international projects and serves on the editorial and reviewer boards of several international journals.
11:15 A Study on Laser Illuminated Digital Cinema Projector Safety
Peter Ludé (Consultant, USA); Dave Schnuelle (Dolby Laboratories, Inc., USA); David H. Sliney (Consulting Medical Physicist & Department of Environmental Health Sciences, Johns Hopkins Bloomberg School of Public Health, USA)
New laser-illuminated digital cinema projectors promise many advantages over today's xenon lamp projectors. However, current safety regulations fail to distinguish between a pure laser beam and the highly processed optical emissions of laser projectors, resulting in undue regulatory burden. It is the high radiance of an unprocessed laser beam that requires safety precautions. Laser-illuminated cinema projectors process the laser source in a manner that causes the light output from the projector lens to be nearly identical to the light output from a xenon lamp projector, largely eliminating the hazard. This paper presents the results of a comprehensive field test comparing the ocular hazards of various cinema projectors, including 35mm film projectors and xenon lamp and laser-illuminated digital projectors. The results confirm that projected light output is similar, whether from a lamp or laser source. This new data is expected to aid in revising the relevant safety regulations.
Presenter bio: Pete Ludé is a prominent engineering leader in broadcast and digital media. He has been involved in designing and implementing hundreds of broadcast and media systems for television networks, film studios, satellite, cable and mobile production. Most recently, Pete was Senior Vice President of Sony's Silicon Valley R&D labs, where his work included workflow software for movie production, stereoscopic imaging and the next generation of 4K laser projectors for digital cinema. He is a past president and Fellow of SMPTE, the Society of Motion Picture and Television Engineers. Pete is also co-founder and Chairman of the Laser Illuminated Projector Association, and is a frequent speaker on the future of broadcast, Ultra HD television, stereoscopic 3D and laser displays.
11:45 Gamut Mapping for Digital Cinema
Jan Froehlich (University of Tuebingen & Stuttgart Media University, Germany); Andreas Schilling (University of Tuebingen, Germany); Bernd Eberhardt (Stuttgart Media University, Germany)
To ensure consistent presentation of wide gamut Digital Cinema Packages (DCPs) on standard gamut screens, a mandatory gamut mapping strategy has to be chosen. In this paper, current gamut mapping algorithms are evaluated with respect to their application in digital cinema. These include: “Simple Clip”, “Cusp Clip”, “Minimum delta E” (MindE), “Hue preserving MindE”, “Weighted MindE” and the mapping strategy used in current projectors. The investigated gamut mapping algorithms will be provided as 3D lookup tables for comparison. These can also be used to retrofit standard gamut projectors with a more advanced gamut mapping strategy. Therefore, the paper closes with an analysis of the losses introduced by using 3D lookup tables for gamut mapping. The intent of this paper is to initiate a discussion about gamut mapping strategies for digital cinema. It may ultimately lead to an addendum to the SMPTE standards for digital cinema.
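
To make the simplest of the strategies named above concrete, a "Simple Clip" amounts to re-expressing wide-gamut RGB in the target primaries and hard-clipping whatever falls outside. A minimal NumPy sketch, with an identity placeholder where a real primaries-to-primaries matrix (e.g. P3 to Rec. 709) would go:

    import numpy as np

    M_SRC_TO_DST = np.eye(3)  # placeholder: substitute the real conversion matrix

    def simple_clip(rgb_linear: np.ndarray) -> np.ndarray:
        # rgb_linear: (..., 3) linear-light RGB in the source gamut
        dst = rgb_linear @ M_SRC_TO_DST.T  # re-express in target primaries
        return np.clip(dst, 0.0, 1.0)      # discard out-of-gamut excursions

The more elaborate algorithms evaluated in the paper differ in what they do with those out-of-gamut values rather than simply discarding them.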
Presenter bio: Jan Froehlich (born 1979 in Freiburg, Germany) is a PhD student at the University of Tuebingen. His research interests are high dynamic range imaging, gamut mapping and color management. Before starting his PhD he was Technical Director at CinePostproduction GmbH. Froehlich has been involved in a number of technically groundbreaking film projects, most recently Europe's first animated stereoscopic feature film "Animals United." He has contributed to multiple research projects on new acquisition, production and archiving systems for digital television and cinema. Froehlich is a member of IS&T, SMPTE, FKTG, and the German Society of Cinematographers (BVK).

The move to the all-IT Facility – Part 1

Three 30-minute Papers

Room: Salon 2

Chair: Al Kovalick (Media Systems Consulting, USA)

The modern media facility is a hybrid mix of traditional AV and IT gear. Workflows rely on combinations of file-based and stream-based methods. This session focuses on the premise that IT will become the basis for the facility infrastructure of the future. Its momentum is unchallenged, and vendors are learning how to leverage its strengths to create products across a broad range of functions. Commodity Ethernet networking is starting to challenge SDI and other purpose-built links. The Cloud is making inroads. The papers in this session will set the stage for the facility to come. They will show what is possible now and provide a glimpse of what an all-IT facility could look like.

10:45 The Evolution to Network-Distributed Genlock
Paul Briscoe (Harris Broadcast, Canada)
The upcoming ST 2059 suite of standards specifies a system that enables distribution of genlock signals, including timecode, over an IP network. This paper looks in depth at the system-level considerations when building a transitional facility that incorporates network-distributed references into an existing multi-distribution facility. Beginning with an overview of the technology under the hood, network considerations are discussed. Requirements for network switch components are reviewed, and redundancy architectures and external reference issues are explored. Finally, an evolutionary model is examined which shows how a networked reference infrastructure can be implemented in harmony with an existing system, allowing a smooth transition to new technologies.
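
For orientation, the clock arithmetic at the heart of the IEEE 1588 (PTP) exchange that ST 2059 builds on fits in a few lines; t1 and t4 are master-side timestamps, t2 and t3 slave-side, and a symmetric network path is assumed:

    def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
        # Sync message: sent at t1 (master), received at t2 (slave).
        # Delay_Req message: sent at t3 (slave), received at t4 (master).
        offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock error vs. master
        delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
        return offset, delay

Any asymmetry in the path shows up directly as offset error, which is why the network design questions the paper raises matter so much.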
Presenter bio: Paul Briscoe is Manager of Strategic Engineering with Harris Broadcast. In his 19 years with Harris Broadcast (formerly Leitch Technology), Paul has occupied the roles of Product Developer, Project Leader, R&D Group Manager and Manager of Engineering. His current focus at Harris is on standards and interoperability, and he is an active participant in numerous SMPTE standardization activities. Prior to joining Leitch / Harris, Paul worked at CBC Television as a lead System Designer of the Canadian Broadcast Centre, one of the first all-digital TV and radio production and broadcasting facilities. His focus was on plant routing systems, centralized resources, computer graphics, T&M and system timing, in addition to working with vendors to define and qualify the then-new SDI-based products. He attended the University of Waterloo, is an active Radio Amateur and avid curler, is a member of SMPTE and IEEE and is currently Chair of the Toronto SMPTE Section.
11:15 Analysis of PTP Locking Time on non-PTP Networks for Genlock over IP
Nikolaus Kerö (Oregano Systems, Austria); Tobias Müller (University of Applied Sciences Technikum Wien, Austria); Thomas Kernen (Cisco, Switzerland); Mickael Deniaud (Cisco Systems, Switzerland)
With the integration of IP-based systems into broadcast architectures, genlocking devices need to be transposed into this environment. Within SMPTE 33TS, an IEEE 1588 profile suited to the production industry is under definition. PTP has been widely adopted in other industries to synchronize nodes over asynchronous networks such as Ethernet. If PTP is used to synchronize broadcast equipment, replacing systems like color black, locking times of five seconds are required to facilitate frequent changes in the network topology while offering the same availability as analog systems. After describing ways to obtain short lock times, results are presented for a three-hop network. Different classes of PTP-unaware components were used, ranging from older-generation desktop devices to current-generation data-center units with line-rate switching capabilities. The lock time was measured using different PTP message rates, as well as default and expedited forwarding for PTP traffic, while applying various network load conditions.
Presenter bio: After receiving a Masters Degree in Communication Engineering with distinction from the Vienna University of Technology, Nikolaus led the ASIC design division at the university’s Institute of Industrial Electronics, successfully managing numerous research projects and industry collaborations. His research activities centered on distributed systems design, especially highly accurate and fault- tolerant clock synchronization. In 2001 he co-founded Oregano Systems Design & Consulting Ltd. as a university spin-off. While offering embedded systems design services to customers, Oregano transferred research results into a complete product suite for highly accurate clock synchronization under the brand name syn1588®, for which Nikolaus manages both development and marketing. He is an active member of the IEEE1588 standardization committee and the SMPTE 33TC standard group and holds frequent seminars on clock synchronization for both industry and academia.
Presenter bio: Thomas Kernen is a Consulting Systems Engineer working for the office of the CTO in Cisco’s European Borderless Network Team. His main area of focus is defining video architectures and transmission solutions for content providers, broadcasters, telecom operators and IPTV Service Providers. Prior to joining Cisco, Thomas spent ten years with different telecoms operators, including three years with an FTTH triple play operator, for whom he developed their IPTV architecture. Thomas is a member of the IEEE Communications and Broadcast Societies, and the Society of Motion Picture & Television Engineers (SMPTE). He is an active contributor within the Digital Video Broadcast (DVB) Project, SMPTE and various European Broadcasting Union (EBU) groups.
11:45 The Principles of Low-Latency Media Centric IP Network Architectures
Thomas Kernen (Cisco, Switzerland); Steven Posick (ESPN Inc., USA)
With the move to Internet Protocol (IP) networks, the desire for low-latency, high-throughput networks has increased in recent years. But low latency and high throughput do not necessarily go hand in hand. In fact, there is somewhat of an inverse relationship between the two, despite the fact that latency plays a critical role in TCP/IP throughput. Microbursts, buffer/queue management, and poor software design all play significant roles in increasing latency. In this paper, we discuss the key design principles for the creation of low-latency, high-throughput IP networks for real-time media-centric applications.
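
One way to see that relationship is the widely cited Mathis model for a single TCP flow, throughput <= (MSS/RTT) x (C/sqrt(p)) with C about 1.22. A quick check in Python shows that doubling the RTT halves the achievable rate at the same loss probability (numbers purely illustrative):

    import math

    def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_prob: float) -> float:
        # Mathis et al. steady-state bound for a single TCP flow.
        return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_prob))

    print(tcp_throughput_bps(1460, 0.001, 1e-6) / 1e9)  # ~14.2 Gb/s at 1 ms RTT
    print(tcp_throughput_bps(1460, 0.002, 1e-6) / 1e9)  # ~7.1 Gb/s at 2 ms RTT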
Presenter bio: Thomas Kernen is a Consulting Systems Engineer working for the office of the CTO in Cisco’s European Borderless Network Team. His main area of focus is defining video architectures and transmission solutions for content providers, broadcasters, telecom operators and IPTV Service Providers. Prior to joining Cisco, Thomas spent ten years with different telecoms operators, including three years with an FTTH triple play operator, for whom he developed their IPTV architecture. Thomas is a member of the IEEE Communications and Broadcast Societies, and the Society of Motion Picture & Television Engineers (SMPTE). He is an active contributor within the Digital Video Broadcast (DVB) Project, SMPTE and various European Broadcasting Union (EBU) groups.
Presenter bio: Steven Posick, associate director, Enterprise Software Development, joined ESPN in 1995. He is a veteran senior systems architect, designer, developer, and security professional, with more than 23 years' experience in Information Technology and a 10-year focus on media: identity, management, and control. His responsibilities have included the management of production workflow application development, broadcast control systems, broadcast system security, and the development of open standards. Steven has participated in several SMPTE committees as an Ad Hoc Group chair and/or document editor, including the recently published SMPTE standard for Media Device Control over Internet Protocol Networks (SMPTE ST 2071), the Archive eXchange Format, and the Study Group on Media Production System Network Architectures.

12:30 – 14:00

Industry Luncheon

Room: Hollywood Ballroom
12:30 Industry Luncheon
David Gibbons (Ustream, USA)
VP Product Marketing
Presenter bio: David is responsible for guiding the evolution of Ustream’s unique online live video solution as it expands to meet the needs of thousands of broadcasters from entertainers to consumers, TV companies to non-profit organizations. Prior to Ustream David held management positions at online video company Ooyala, and at audio/video production equipment vendor Avid Technology. He has guided product strategy for multiple product lines generating over $300M in annual revenues. He studied Electronic Engineering at the National University of Ireland, and Audio Engineering at Kingston University in London, England.

14:15 – 15:45

The move to the all-IT Facility – Part 2

Three 30-minute Papers

Room: Salon 2

Chair: Al Kovalick (Media Systems Consulting, USA)

The modern media facility is a hybrid mix of traditional AV and IT gear. Workflows rely on combinations of file-based and stream-based methods. This session focuses on the premise that IT will become the basis for the facility infrastructure of the future. Its momentum is unchallenged, and vendors are learning how to leverage its strengths to create products across a broad range of functions. Commodity Ethernet networking is starting to challenge SDI and other purpose-built links. The Cloud is making inroads. The papers in this session will set the stage for the facility to come. They will show what is possible now and provide a glimpse of what an all-IT facility could look like.

14:15 Media Facility Infrastructure of the Future
Eric Pohl (National TeleConsultants, USA)
The owners of contemporary television facilities face challenges from a number of directions:
• The introduction and use of large-format images in television program production, creating motion images in excess of HD rates (2K and 4K)
• The increasing reliance on IT storage and server technology for motion-image storage and processing
• The need to accept and provide content in multiple forms to multiple business partners
• The need to remain compatible with the current large installed investment in SDI baseband infrastructure
• The emergence of higher-capacity IP-based transport and routing, as well as new standards for encapsulating HD video into IP
This paper will review the evolution of large-facility infrastructure in the context of these new trends and offer a point of view on the characteristics and requirements of the "multi-resolution infrastructure of the future".
Presenter bio: Eric Pohl has spent his career managing the development and implementation of new television technology. He currently is CTO of the design and consulting firm National TeleConsultants. Over the years, he has worked for broadcast networks, equipment manufacturers and post production facilities. He has received 3 Emmys for work on the Olympics for NBC. He has a BS in Electrical Engineering from Rensselaer Polytechnic Institute and his MS in Electrical Engineering from NYU/Poly.
14:45 Software Defined Network for Media Workflows
Tom Ohanian (Cisco, USA); Ammar Latif (Cisco Systems, USA)
Software-defined networking (SDN) is receiving considerable attention in the broadcast and media industry due to its potential to bring innovation to Internet Protocol (IP) and networking approaches. This is especially relevant and timely due to the pervasiveness of IP and Ethernet as the converged medium for carrying multiple services (video, audio, data, etc.). This paper will outline the SDN concept as it relates to media IP workflows and broadband delivery of content. The paper will define network programmability and network Application Programming Interfaces (APIs) and their role in achieving more granular control of network services, network analytics, and security. Finally, SDN use cases for media IP workflows and their business benefits will be highlighted.
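
As a flavour of what "network programmability" can look like, here is a hypothetical sketch of a client asking an SDN controller's REST API to reserve bandwidth for a media flow. The endpoint, payload fields, and DSCP value are invented for illustration and do not correspond to any particular controller:

    import json
    import urllib.request

    def reserve_flow(controller: str, src: str, dst: str, mbps: int) -> None:
        body = json.dumps({"src": src, "dst": dst,
                           "bandwidth_mbps": mbps, "dscp": 34}).encode()
        req = urllib.request.Request(
            f"http://{controller}/api/v1/reservations",  # hypothetical endpoint
            data=body, headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # raises urllib.error.HTTPError on failure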
Presenter bio: Tom Ohanian is a member of the Digital Media Strategy team at Cisco Systems. He was on the founding team at Avid Technology and is the co-inventor of the Avid Media Composer, Film Composer, and Multicamera Systems. He has extensive broadcast engineering, production, and post-production experience and is an Academy Award and two-time Emmy recipient for scientific and technical invention.
Presenter bio: Ammar Latif is a systems engineer with the Media team at Cisco Systems. His current focus is on IP architectures for digital media workflows in the content-provider space. Ammar has supported a number of large service-provider networks in North America, with a focus on advanced IP routing technologies. He is a member of SMPTE, holds a Master of Engineering degree from the University of Toronto, and holds a CCIE certification.
15:15 The Foundations of the All IT Media Facility
Al Kovalick (Media Systems Consulting, USA)
Over the past 10 years, file-based (IT) production and broadcast workflows have become mature. However, the migration of real time AV workflows to IT has taken a back seat, until now. With the advent of Software Defined Networks/Storage, 10G Ethernet, fast network switching, compute virtualization methods and widely available web apps (SaaS), the move to the “all IT facility” is on the horizon. This talk will describe the enabling technologies and their role in the migration. Technical obstacles and advances to resolve them will be described. Bottom line, a clear path to the all IT facility will be outlined.
Presenter bio: Al Kovalick has worked in the field of hybrid AV/IT systems for the past 20 years. Previously, he was a digital systems designer and technical strategist for Hewlett-Packard. Following HP, from 1999 to 2004, he was the CTO of the Broadcast Products Division at Pinnacle Systems. Currently, he is with Avid Technology and serves as an Enterprise Strategist and Fellow. Al is an active speaker, educator, author and participant with industry bodies including SMPTE and AMWA. He has presented over 50 papers at industry conferences worldwide and holds 18 US and foreign patents. He is the author of the book “Video Systems in an IT Environment, 2ed”. In 2009 Al was awarded the David Sarnoff Gold Medal from SMPTE. Al has a BSEE degree from San Jose State University and MSEE degree from the University of California at Berkeley. He is a SMPTE Fellow.

Multi-view Production

Three 30-minute Papers

Room: Salon 1

Chair: Howard Lukk (Disney, USA)

Capturing stereoscopic content can be challenging. In exploring new ways to capture stereoscopic content, the idea of capturing monoscopically, with secondary cameras for image-depth capture, was put into practice. Walt Disney Studios field-tested a newly designed camera rig to prove out the complete workflow. The technique was named Hybrid 3D, as it is neither "native" nor "conversion": it is designed to push all of the stereoscopic decisions into the post-production environment. One thing that became clear early in the process was the need to interchange depth/disparity maps. We will describe the rig, the computation of the depth information, and a new method for representing the depth information.

14:15 Tri-Focal Rig
Johannes Steurer (Arnold & Richter Cine Technik & ARRI, Germany)
The objective of this project is a new method for capturing live-action stereoscopic content for feature films. Present stereoscopic rigs have limitations in their designs. The Fraunhofer/ARRI Tri-Focal Rig is a three-camera system using a primary and two secondary cameras. It is designed to assure perfect synchronization of the main camera with all secondary cameras, all the way to the front-end shutter of the camera. The system also provides a method for on-set confidence monitoring of the secondary cameras, for both image and depth, using the STAN technology.
Presenter bio: Johannes Steurer is Principal Engineer in the R&D department of Arnold & Richter Cine Technik, Munich. Currently he is responsible for research and technical innovations in the area of motion picture capturing with a focus on three-dimensional information (stereo-3D, depth-sensing, lightfield), representing ARRI in collaborative European and international research projects. Johannes received a Dr.-Ing. degree in electrical engineering from the Technical University Munich in 1992. Joining ARRI in 1994, he first was technical leader of the newly founded digital film postproduction group. Later he moved to R&D as project manager for the ARRILASER film recorder and was manager for the business unit Digital Intermediate Systems. He is recipient of several awards including an Academy Award® of Merit (OSCAR® Statuette) in 2012 for the design and development of the ARRILASER film recorder.
14:45 Depth/Disparity Creation for Trifocal Hybrid 3D System
Ralf Tanger (Fraunhofer HHI, Germany); Marcus Müller (Fraunhofer HHI, Germany); Peter Kauff (Fraunhofer Heinrich-Hertz Institute, Germany); Ralf Schäfer (Fraunhofer Heinrich-Hertz-Institut, Germany)
Fraunhofer HHI has developed a software solution to create depth and disparity maps from a Tri-Focal Rig. Computational cinematography techniques can be used to determine the depth of the items in the shot by mathematically comparing the differences between the images captured by each of the motion picture cameras after the initial photography is completed. By creating a 3D geometry of the scene and then projecting the images onto that 3D geometry, a stereoscopic movie can be created after principal photography has been completed. The software adapts disparity-estimation algorithms to the specific needs of the multi-camera system, including, in particular, dedicated post-processing filters.
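
The geometry underlying any such system is compact enough to state here: for a rectified camera pair with focal length f (in pixels) and baseline B, a disparity of d pixels corresponds to a depth of Z = f * B / d. As a sketch:

    def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
        # Rectified-pair stereo geometry: Z = f * B / d
        return f_px * baseline_m / disparity_px

    print(depth_from_disparity(2000, 0.1, 20))  # 10.0 m for f=2000 px, B=10 cm, d=20 px

The hard part, and the subject of the paper, is estimating d reliably for every pixel, which is where the dedicated post-processing filters come in.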
Presenter bio: Ralf Tanger graduated in Electrical Engineering at the Technical University of Berlin in 1996. From 1993 to 1997 he had been working for Daimler-Benz Research in the field of image classification with Neural Networks. In 1997 he joined the Image Processing Department of the Fraunhofer Institute for Telecommunications HHI. He was and is engaged in several European (COST211ter, NEMESIS, VIRTUE, 2020 3D Media, 3D4YOU, Muscade) and German research projects in the fields of segmentation, video conferencing and 3D video. Currently he is mainly interested in 3D video analysis, depth estimation and 3D formats. Ralf is a member of IEEE, SMPTE and VDI.
15:15 Depth/Disparity Interchange Representation for Post-Production
Peshala Vishvajith Pahalawatta (Dynamic Digital Depth, USA); Kevin J Stec (Dynamic Digital Depth, Inc., USA)
Depth/disparity information is conducive to various post-production processes, such as compositing, editing, and alternate-view rendering. This paper therefore describes a display-agnostic framework for representing depth and disparity in post-production. The paper details the requirements and constraints associated with the representation. It also shows the manner in which the interchange framework can be used during compositing, conforming, and editing. The framework simplifies the exchange of depth/disparity information gathered from live and CGI sources. Finally, the paper analyzes the robustness of the representation to depth/disparity conversion errors and provides example scenarios in which the representation can be used.
Presenter bio: Peshala Pahalawatta received his MSc and PhD degrees in electrical engineering from Northwestern University in Evanston, IL in the years 2003 and 2007, respectively. While there, he was a member of the Image and Video Processing Laboratory headed by Prof. Aggelos K. Katsaggelos. In 2007, Peshala joined the video compression research team at Dolby Laboratories in Burbank, CA, where he worked on video coding, quality evaluation, and the standardization of 3D video coding technology. In 2012, Peshala joined Dynamic Digital Depth (DDD) USA in Los Angeles, CA as a senior research engineer in Image Technology. His interests lie in image and video compression and enhancement, 3D video processing, and subjective and objective video quality evaluation.

15:45 – 16:15

Break

30 Minutes

Rooms: Salon 1, Salon 2

16:15 – 17:45

The move to the all-IT Facility – Part 3

Three 30-minute Papers

Room: Salon 2

Chair: Al Kovalick (Media Systems Consulting, USA)

The modern media facility is a hybrid mix of traditional AV and IT gear. Workflows rely on combinations of file-based and stream-based methods. This session focuses on the premise that IT will become the basis for the facility infrastructure of the future. Its momentum is unchallenged, and vendors are learning how to leverage its strengths to create products across a broad range of functions. Commodity Ethernet networking is starting to challenge SDI and other purpose-built links. The Cloud is making inroads. The papers in this session will set the stage for the facility to come. They will show what is possible now and provide a glimpse of what an all-IT facility could look like.

16:15 Video Processing in an FPGA-enabled Ethernet Switch
Thomas Edwards (FOX Network Engineering & Operations, USA); Warren Belkin (Arista Networks, USA); Andy Bechtolsheim (Arista Networks, USA)
Carriage of uncompressed HD video using IP holds great potential for enhancing the flexibility of broadcast plants while reducing the number of cables required, through aggregation of signals using statistical multiplexing. The broadcast industry is just beginning to determine the appropriate architectures to best utilize professional video-over-IP capabilities. The Arista 7124FX Application Switch is a 10GbE data-center-class Ethernet switch that also supports application acceleration through the use of an on-board FPGA (Field Programmable Gate Array) that provides the processing capability of 32 2.5GHz cores without adding network jitter. A proof-of-concept has been developed to show how an FPGA-enabled switch can perform some IP video processing functions, such as frame-accurate video stream switching of SMPTE 2022-6 RTP flows. Attention will be paid to demonstrating the requirements on the protocols, the flexibility of IP deployments, and the key architectural issues at the firmware level.
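
A sketch of the frame-boundary detection such a switch depends on: SMPTE 2022-6 carries video in RTP, and the RTP marker bit is set on the last datagram of each video frame, so the following packet is a clean switch point. The parser below reads only the fixed 12-byte RTP header:

    import struct

    def parse_rtp_header(datagram: bytes) -> dict:
        b0, b1, seq = struct.unpack("!BBH", datagram[:4])
        timestamp, ssrc = struct.unpack("!II", datagram[4:12])
        return {
            "version": b0 >> 6,                # RTP version, normally 2
            "seq": seq,                        # per-packet sequence number
            "timestamp": timestamp,
            "ssrc": ssrc,                      # identifies the flow
            "end_of_frame": bool(b1 & 0x80),   # marker bit: last packet of frame
        }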
Presenter bio: Thomas Edwards is VP, Engineering and Development for Fox Network Engineering and Operations. He is responsible for new technology strategy, testing, and evaluation. He has contributed to the SMPTE Task Force on 3D to the Home, and has been involved in FOX’s deployment of DVB-S2 satellite radio DTV distribution system. Previously he was Senior Manager, Interconnection Engineering for the PBS Interconnection Replacement Office where he was responsible for the engineering management of the Next Generation Interconnection System (NGIS). Before entering the television industry, he was the streaming media product manager at Cidera, where he developed a broadband desktop video channel for technical employees delivered using IP-over-satellite. Edwards holds a Master’s Degree in Electrical Engineering from the University of Maryland, is a member of IEEE and SMPTE.
Presenter bio: As Chief Development Officer, Andy Bechtolsheim is responsible for the overall product development and technical direction of Arista Networks. Previously Andy was a Founder and Chief System Architect at Sun Microsystems, where most recently he was responsible for industry standard server architecture. Andy was also a Founder and President of Granite Systems, a Gigabit Ethernet startup acquired by Cisco Systems in 1996. From 1996 until 2003 Andy served as VP/GM of the Gigabit Systems Business Unit at Cisco that developed the very successful Catalyst 4500 family of switches. Andy was also a Founder and President of Kealia, a next generation server company acquired by Sun in 2004. Andy received an M.S. in Computer Engineering from Carnegie Mellon University in 1976 and was a Ph.D. Student at Stanford University from 1977 until 1982.
16:45 All-IP Video Processing of SMPTE 2022-6 Streams on an All Programmable SoC
Matt Klein (Xilinx, USA); Thomas Edwards (FOX Network Engineering & Operations, USA)
Realizing the functions of routing, switching and video processing equipment while utilizing standard networking equipment for video transport provides an evolutionary step toward gaining the benefits of professional media networking in broadcast environments. This paper describes a fully networked broadcast platform based on the Xilinx Zynq-7000 All Programmable System on a Chip (SoC) that performs live video processing, similar to that of traditional broadcast equipment switchers and routers, but uses 10GE networking interfaces for uncompressed video transport. HD-SDI video only enters and exits a 10GE network through SMPTE 2022-6 bridges based on the Xilinx Kintex-7 FPGA. The demonstration platform is connected using a standard off-the-shelf 10GE switch showing the vision of a networked broadcast facility.
Presenter bio: Matt is a Distinguished Engineer at Xilinx with a broad background in hardware and system design, spanning areas as disparate as RF and microwave and broadcast equipment such as video servers, gained over 15 years at HP and 5 years at Pinnacle Systems. Matt has always used FPGAs because of the flexibility they offer the broadcast industry in adhering to evolving standards, and has been at Xilinx for 9 years. For the last several years at Xilinx, Matt has been a strong participant and driver in standards bodies such as the VSF (Video Services Forum) and SMPTE, and has worked with these bodies, broadcasters, and equipment manufacturers on the video-over-IP standards SMPTE 2022-1/-2 and SMPTE 2022-5/-6. Matt holds 8 patents, with more than 10 pending, and has a B.S.E.E. and M.S.E.E. from Case Western Reserve and Santa Clara University. Matt is married with three children.
17:15 IP and Media perfectly in tune? – Running an IT media facility in a predictable way
Luc Andries (SDNsquare, Belgium)
Media organisations based on IP networks and IT technology are running in a continuous state of accepted (but unknown and unquantifiable) risk. The world of SDI and tapes gave us guaranteed service; the world of IP and IT promises flexibility, introducing a statistically very good but 'best effort' approach. Growing demand in load, usage, formats, etc. increases the stress and skews the statistical behaviour, increasing the chances of under-performance and failures. To cope with this, over-provisioning has been the sole solution offered by the industry, leading to ever-increasing investment in network, storage and servers, and to increasingly inefficient use of resources. Over-provisioning reduces or even masks part of the risk, but it does not guarantee the performance and reliability of the installation; it still falls short of the strict predictability that an operational solution requires when it matters. However, technologies and solutions do exist today to make IP networks and IT systems behave in a predictable way. Applied in the right way, they can deliver guaranteed, predictable performance, run much more efficiently, scale up linearly and be managed more easily. This paper will review the 'unpredictable behaviour' associated with IP/IT technology, where it arises, and what needs to be done to eliminate it. It will show what can be done with existing technologies, and what the impact of a fully predictable system is on the manageability, efficiency and scalability of network and storage/data-centre solutions for future media organisations.
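
A toy queueing calculation makes the over-provisioning argument concrete: even in the simple M/M/1 model, the mean time a packet spends in the system is 1/(mu - lambda), which explodes as utilisation approaches 100%. The numbers below are purely illustrative:

    def mm1_mean_delay(service_rate: float, arrival_rate: float) -> float:
        # Mean time in system for an M/M/1 queue (same time units as the rates).
        assert arrival_rate < service_rate, "queue is unstable at >= 100% load"
        return 1.0 / (service_rate - arrival_rate)

    for load in (0.5, 0.8, 0.95, 0.99):
        print(load, mm1_mean_delay(1.0, load))  # 2x, 5x, 20x, 100x the service time

Hence the industry habit of running links far below capacity; the paper's point is that smarter control can achieve predictability without paying for all that idle headroom.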
Presenter bio: Luc Andries holds a Master's Degree in Experimental Physics. After 15 years of experience in research and development and computer-integrated manufacturing in the connector industry, Luc Andries joined VRT (the Flemish/Belgian public broadcaster) in 1998, first as a network and storage specialist in the IT department and later as infrastructure architect in the research lab. In 2005 he was appointed top expert in the research lab. Today Luc Andries is CTO of SDNsquare, heading the SDNsquare research laboratory in Belgium. He uses his expertise in storage and network technology to help SDNsquare develop, prove, and deliver bespoke solutions for its digital media customers. Luc's research focuses on the modeling of storage and network technologies and media workflows/data flows, and on how the technology can be used to meet the challenges faced by the digital media and broadcasting industries. Luc and his team are attempting to bridge the gap between IT and media. Luc is a frequent invited speaker at media conferences all over the world and is highly regarded as a consultant in media IT.

Multi-view Post Production

Three 30-minute Papers

Room: Salon 1

Chair: Kevin J Stec (Dynamic Digital Depth, Inc., USA)

Getting depth information directly from the production process enables a whole new level of flexibility in post. Views can be shifted without the need to reshoot a scene, compositing may no longer require green screen capture, and merging of live action and visual effects becomes a straightforward process. Also, depth information is an important element in methods for distribution of multi-view content where the accuracy of the depth information is critical to performing high quality rendering at the display.

16:15 Disparity/Depth Generation and View Rendering
Simon Robinson (The Foundry, United Kingdom); Csilla Andersen (Dolby Laboratory, USA); Thaddeus Beier (Dolby Laboratories, USA)
The Foundry has created and supports Ocula, a plug-in for the Nuke software suite specifically for stereoscopic and autostereoscopic work. Ocula is capable of deriving and manipulating depth and disparity information during the post-production process, including VFX and compositing. A common interchange format allows multiple vendors to work on different shots or scenes within a production while enabling interoperability. This paper will discuss the use of the common interchange format and its integration into Ocula. It will also touch on creating both stereo and multi-view renders for viewing in-process projects.
Presenter bio: Csilla Andersen is Product Marketing Manager for the award-winning Dolby glasses-free 3D system. During the past years, Csilla has worked on stereo and autostereo technologies at Dolby and led the development of the end-to-end framework for content production, delivery, and playback known as Dolby 3D. She is behind several partnerships created to develop and market the technology: The Foundry, Cameron & Pace Group, Akamai, and several others in the movie and broadcast industry. Csilla led the product development of the plug-in created jointly with The Foundry to derive and manipulate disparity information to achieve high-quality content playback on glasses-free 3D displays. Csilla holds an M.Sc. in International Business Economics and worked at Deloitte and the HP Imaging and Printing Group before Dolby.
Presenter bio: Thad Beier is the Director of Image Platform Workflow at Dolby Laboratories. He is engaged with all aspects of making better images within Dolby, and providing tools for artists to go further than they have before. Before joining Dolby a year ago, Thad worked as an artist and software developer and researcher in visual effects and computer graphics since 1978, creating effects for the first four Fast and Furious movies among many others. He is a member of AMPAS, and has received a Sci-Tech award from the Academy.
16:45 Distribution Servicing
Walt Husak (Dolby Laboratories, Inc., USA)
This paper will explore the use of a common interchange format for servicing distribution outlets with depth or disparity information. The relative locations of objects in a scene may vary throughout the production and post-production process and are somewhat fluid. This metadata should have a common interchange format and provide the necessary information for downstream distribution channels. These channels include both the DCP for theatrical delivery and the IMF for home or mobile delivery. The characteristics of the distribution paths and their effect on the design of a common interchange format will be discussed.
Presenter bio: Walt Husak is the Director of Image Technologies at Dolby Labs, Inc. He began his television career at the Advanced Television Test Center (ATTC) in 1990, carrying out video objective measurements and RF multipath testing of HDTV systems proposed for the ATSC standard. Joining Dolby in 2000, Walt has spent the last several years studying and reporting on advanced compression systems for Digital Cinema, Digital Television, and Blu-ray. He has managed or executed visual quality tests for DCI, ATSC, Dolby, and MPEG. He is now a member of the CTO's office, focusing his efforts on 3D for Digital Cinema and Digital Television. Walt provides industry lectures on Digital Cinema systems, image compression, and 3D. Walt has authored numerous articles and papers for a number of major industry publications. He currently chairs the SMPTE 3D Home Master Image AHG, the SMPTE Frame Packing AHG, and the DVB 3D Technology Providers Group, and co-chairs the MPEG Frame Compatible AHG in MPEG.

18:00 – 20:00

Opening Night Reception

Room: Exhibit Hall

Wednesday, October 23

09:00 – 10:30

Evolving Technologies for Broadcast Facilities – Part 1

Three 30-minute Papers

Room: Salon 1

Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
09:00 Playout Automation in a Virtual Environment
Eric Openshaw (Pebble Beach Systems, USA); Glen Sakata (Affinis Advisory Group LLC, USA)
As consumer demand for video entertainment increases and technology evolves away from bespoke hardware solutions for processing and delivering it, today's digital media companies struggle to keep up with constantly changing business models. Process and storage clouds promise to augment, and eventually replace, the modern media factory using persistence and elasticity. But virtualization is more than installing a VM and loading software: the applications must be designed for virtual environments, reducing inherent complexity and remaining able to keep up with your business objectives today and tomorrow.
Presenter bio: As General Manager of Pebble Broadcast Systems, the US subsidiary of UK-based Pebble Beach Systems, Eric Openshaw oversees all aspects of the business, including sales, marketing, system installation, and product support across North America. With strong technical roots in software engineering, Eric was key to the design and implementation of Marina, the company's flagship enterprise-level automation system. Eric is still heavily involved in R&D and has a passion for keeping the company ahead of the ever-changing technology curve that so many media and entertainment vendors are facing. Eric finished top of his class in his Bachelor of Engineering degree at UNSW, Australia, and shortly afterwards transitioned to the broadcast industry. He has now accumulated over 10 years' experience in broadcast automation.
Presenter bio: Glen is responsible for engaging and evolving customer and client relationships across multiple verticals and technology platforms. Over the past 25 years, he has held key management positions in sales, marketing, and general management with industry leaders such as Vinten Broadcast, Louth Automation, Faroudja Labs, and Cisco Systems. During the past decade, Mr. Sakata held executive positions at Harmonic Inc. as VP Sales EMEA and VP Sales Americas. Glen was recently VP/GM of Pharos Communications – Americas and has been consulting for a variety of companies since 2011.
09:30 Mobile Emergency Alerting via ATSC Mobile DTV
James Kutzner (Public Broadcasting Service, USA); Wayne Luplow (Zenith Electronics, USA)
Technology is presented for the Mobile Emergency Alerting Service (M-EAS), delivered by broadcasters as part of ATSC Mobile Digital Television (MDTV). The M-EAS standard focuses on the broadcast emission system, the core of which is the stations' emergency information collection/production systems. Processes to collect and deliver emergency messages within the nation's emergency messaging infrastructure are described. CAP-based messages are made much more useful when enhanced by rich media that explain the emergency before, during, or after events. The workflow supporting this service is discussed, along with a review of the expected behavior and features that MDTV receivers will provide.
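
For readers unfamiliar with CAP (the Common Alerting Protocol), a trimmed, illustrative skeleton of a CAP 1.2 alert of the kind M-EAS carries is shown below, held in a Python string; it is not a complete, schema-valid message:

    cap_alert = """<?xml version="1.0" encoding="UTF-8"?>
    <alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
      <identifier>EXAMPLE-0001</identifier>
      <sender>alerts@example.org</sender>
      <sent>2013-10-23T09:30:00-07:00</sent>
      <status>Actual</status>
      <msgType>Alert</msgType>
      <scope>Public</scope>
      <info>
        <category>Met</category>
        <event>Flash Flood Warning</event>
        <urgency>Immediate</urgency>
        <severity>Severe</severity>
        <certainty>Observed</certainty>
        <headline>Flash flood warning for the example area</headline>
      </info>
    </alert>"""
    print(cap_alert)

The "rich media" enhancement described above attaches video, maps, or audio alongside this structured core.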
Presenter bio: James Kutzner is Senior Director of Advanced Technology at PBS where he manages engineering and technical projects within PBS. Kutzner is a member of the ATSC Board and he chairs the ATSC Technology and Standards Group 3 on the development of ATSC 3.0. He holds a Masters degree in Engineering Management from George Washington University and a Bachelor of Electrical Engineering degree from the University of Minnesota. He is a Fellow of SMPTE and a member of the IEEE.
10:00 Sending ads/offline content to affiliate stations using primary distribution feeds based on a hitless compression standard with no cost or bandwidth increase
Gustavo Marra (ATEME, USA)
The model of primary distribution for broadcasters (main station to affiliates) typically revolves around dedicated video links. These links use a very high bitrate to preserve video quality before final distribution, representing high costs, be it over satellite, fiber or IP. This paper shows how a standards-based technology called Piecewise CBR allows this link to be shared, without interruption to the main video signal, at times when a bandwidth reduction is tolerable, for the distribution of commercials and other content, thus avoiding extra costs for this occasional use. Being interoperable and based on video compression, the solution allows the use of traditional video distribution infrastructure in TV stations. The technology enables a bitrate reduction of the primary distribution signal so that other feeds can be included in parallel while maintaining the total rate, and then restores the original configuration after a certain period of time, without perturbation to the main video feed (hitless).
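
Back-of-envelope arithmetic shows the appeal. The figures below are assumptions for illustration, not from the paper: briefly re-rating a 60 Mb/s primary feed to 45 Mb/s frees 15 Mb/s, which over a ten-minute window moves roughly a gigabyte of ad or offline content:

    LINK_MBPS = 60            # assumed total primary-distribution rate
    REDUCED_VIDEO_MBPS = 45   # assumed temporary video rate during the window
    WINDOW_S = 10 * 60        # ten-minute reduction window

    freed_mbps = LINK_MBPS - REDUCED_VIDEO_MBPS
    transferred_gb = freed_mbps * WINDOW_S / 8 / 1000  # megabits -> gigabytes
    print(freed_mbps, transferred_gb)  # 15 Mb/s freed -> ~1.1 GB delivered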
Presenter bio: A graduate in Telecommunication Engineering, with a postgraduate degree in Video Networks over IP and an MBA in Project Management, Gustavo has been working in the broadcasting market for eleven years. He is now Director of Applications Engineering, responsible for all aspects of the technical execution and delivery of ATEME solutions in the Americas market. Before ATEME, Gustavo worked for TV Globo as a Project Manager. Gustavo won the 2008 Broadcasting Engineering Award, and his paper was chosen as the best technical entry of IBC 2010 by the IET review committee.

Audio Developments for Cinema and Broadcast

Three 30-minute Papers

Room: Theatre (Chinese 6)

Chair: Thomas A Scott (Onstream Media/EDnet, USA)

Cinema sound and broadcast audio are the focus of this session, covering three active areas of concern in 2013. The CALM Act is mandatory now; a paper on developments in the push for uniform loudness of TV programs and their interstitial messages will be presented by two of the leaders in this effort within both SMPTE and ATSC. The proliferation of digital cinema sound well beyond 5.1 channels is happening around the world; the development of an open format for object-based multi-channel audio for digital cinema will be presented. Finally, the ongoing SMPTE 25CSS effort to codify more modern methods and techniques for calibrating cinema listening spaces is the background for a paper by a long-time practitioner describing the evolution of sound-system analysis and problem mitigation over the past 30 years. After these three presentations, attendees should feel much more up to date on all of these developments.

09:00 Are we CALM now?
J. Patrick Waddell (Harmonic Inc., USA); Jim Starzynski (NBC Universal, USA)
The ATSC issued an updated A/85 this year. Per the terms of the CALM Act, that document is now the "mandatory" one. What has been, and will be, the impact of this new revision of this key document? This paper will answer that and other questions about CALM's implementation for broadcasters and MVPDs.
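
The measurement underneath A/85 compliance is ITU-R BS.1770 loudness. A simplified, single-channel sketch of its gated integration (absolute gate at -70 LKFS, relative gate 10 LU below a first-pass mean) is given below; it takes the mean-square power of each 400 ms K-weighted block as input and omits the K-weighting filter itself:

    import math

    def lkfs(mean_square: float) -> float:
        return -0.691 + 10 * math.log10(mean_square)

    def integrated_loudness(z_blocks: list) -> float:
        gated = [z for z in z_blocks if lkfs(z) > -70.0]       # absolute gate
        rel_threshold = lkfs(sum(gated) / len(gated)) - 10.0   # relative gate
        kept = [z for z in gated if lkfs(z) > rel_threshold]
        return lkfs(sum(kept) / len(kept))                     # integrated LKFS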
Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently Chair of the ATSC's TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC's Bernard J. Lechner Outstanding Contributor Award and has shared in 4 Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
Presenter bio: Jim Starzynski is Director and Principal Audio Engineer for NBC Universal Advanced Engineering, working on HDTV and overseeing audio technologies and practices for all NBC Universal broadcast and cable properties. He is responsible for establishing NBC’s audio strategy for DTV. Jim is chairperson of Advanced Television Systems Committee’s technical subgroup S34-2 on Next Generation Audio Systems and S6-3 on digital television loudness. He is on the board of directors for the home audio division of the Consumer Electronics Association and a member of SMPTE and AES. Jim gave expert testimony to Congress on the Commercial Advertising Loudness Mitigation Act, holds four Emmy Awards for technical achievement for multiple Olympic broadcasts and is the 2011 recipient of the ATSC’s highest technical honor, the Bernard J. Lechner Outstanding Contributor Award.
09:30 An Open Object-Based Immersive Audio Content Format
Ton Kalker (DTS, USA); Jean-Marc Jot (DTS, Inc., USA)
We introduce an open and future-proof multi-channel audio format designed for the creation, archiving and distribution of digital media content for the cinema, broadcast and gaming industries. The proposed format extends the existing multi-channel audio formats with the addition of a plurality of audio object channels accompanied with positional rendering metadata. We describe the basic concepts of the sound-field model, provide a brief introduction to the associated file and stream format, and report on the status of relevant standardization activities. The presentation also includes a demonstration of creation and playback tools supporting the format.
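
To make "audio object channels accompanied with positional rendering metadata" concrete, here is a hypothetical shape such metadata might take; the field names are invented for illustration and are not the proposed format's schema:

    import json

    audio_object = {
        "object_id": 7,
        "label": "helicopter",
        "keyframes": [  # time (s), normalized room position (x, y, z), gain
            {"t": 0.0, "pos": [0.9, 0.5, 0.8], "gain": 1.0},
            {"t": 2.4, "pos": [0.1, 0.5, 0.8], "gain": 0.7},
        ],
    }
    print(json.dumps(audio_object, indent=2))

The renderer, not the mix stage, decides which loudspeakers reproduce each object, which is what makes such a format speaker-layout agnostic.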
Presenter bio: Ton Kalker received both his M.S. and Ph.D. degrees in mathematics from the University of Leiden, The Netherlands, in 1979 and 1983, respectively. He has made significant contributions to the field of media security, in particular digital watermarking, robust media identification and interoperability of Digital Rights Managements systems. His research in this growing field started in 1996, submitting and participating in the standardization of video watermarking for DVD copy protection. His solution was accepted as the core technology for the proposed DVD copy protection standard and earned him the title of Fellow of the IEEE (2002). His subsequent research focused on robust media identification, where he laid the foundation of the Content Identification business unit of Philips Electronics (currently Civolution), successful in commercializing watermarking and other identification technologies. Dr. Kalker is co-author on 40+ granted patents and 40+ patent applications. Dr. Kalker is currently VP of Security and DRM at DTS. Prior to that he was VP of Technology for the Innovation Center of Huawei in Santa Clara, responsible for driving the media research program, focusing on real-time communication, media technologies for future Internet architectures, and HMI. Prior to Huawei, as a Distinguished Technologist at Hewlett-Packard Labs, he focused his research on the problem of non-interoperability of DRM systems. He was one of the three lead architects of Coral, publishing a standard framework for DRM interoperability in the summer of 2007. Subsequently, he co-chaired the Technical Working Group of DECE (http://www.decellc.com), publicly known as UltraViolet (http://www.uvvu.com). He also actively participates in the academic community. Dr. Kalker is Co-Founder, IEEE Transactions on Information Forensics (2005); Co-Founder and Chair, Information Forensics and Security Technical Committee (2006-2007); Guest Editor, IEEE Transactions on Signal Processing Supplement on Secure Media; Associate Editor, IEEE Transactions on Information Forensics and Security (2005-Present); Associate Editor, IEEE Transactions on Multimedia (2004-2005) (2011-Present); Associate Editor, IEEE Transactions on Image Processing (2011-Present); Associate Editor, IEEE Signal Processing Letters (2003-2004); Associate Member, Information Forensics and Security Technical Committee; Member, Image and Multidimensional Signal Processing Technical Committee (2000-2005); Member, Image, Video, and Multidimensional Signal Processing Technical Committee (2011-Present); Member, Signal Processing Fellow Evaluation Committee (2009-2011); Technical Program Chair, the first Workshop on Information Forensics and Security (WIFS-09 in London); Tutorial Co-Chair, ICME (2010); and Tutorial Co-Chair, ICIP (2011). Dr Kalker was part-time faculty at the University of Eindhoven, the Netherlands (1998-2004). Dr. Kalker has worked on a wide variety of topics related to media security, carefully balancing theoretical and practical aspects. 
Of particular importance are Ton’s contributions in the following areas: real-time video watermarking technologies on constrained platforms for active copyright enforcement; assessing the security of watermarking technologies, including secure watermark detection; watermarking for traitor tracing and forensics; secure signal processing (processing in the encrypted domain); limits and methods for reversible watermarking; robust hashing of audio, with an emphasis on efficient search strategies; semantic compression (compressed representations that maintain semantic significance); secure biometrics; and interoperability of Digital Rights Management, based upon his work in Coral and DECE.
Ton Kalker
10:00 A perspective on the evolution of sound system equalization and its possible impact on the new standards being developed
John Murray (Optimum System Solutions, USA)
This paper examines how sound-system equalization has evolved from primarily feedback notching in sound reinforcement and electrical amplifier output curves in cinema, to 2-dimensional RTA (Real-Time Analysis) curves 30 years ago, to 3-dimensional, 2-port FFT-based neutral transfer functions today. A discussion of the shortcomings of the RTA method shows how the 3-D FFT approach solves these problems. For these modern transfer-function measurements, issues from non-equalizable, contaminating phenomena at short wavelengths are covered as well as two long-wavelength issues that cause problems for all measurement techniques. Potential solutions for all three of these issues are proposed for consideration to be incorporated into future equalization standards.
Presenter bio: John Murray is a 35-year sound-reinforcement industry veteran. He has a BS in Radio/TV Production & Engineering, Ohio University. He spent 13 years as a sound-system contractor and 13 years in product development and dealer training for EV, TOA, and Peavey MediaMatrix. He currently is a sound-system design and optimization consultant. He is a member of Audio Engineering Society (AES), National Systems Contractors Association (NSCA), InfoComm, SMPTE and Synergetic Audio Concepts (Syn-Aud-Con).
John Murray

Storing and Protecting Your Assets – Part 1

Three 30-minute Papers

Room: Salon 2

Chair: Paul Chapman (Foto-Kem Industries Inc., USA)
09:00 Archive eXchange Format: Interchange and Interoperability for Operational Storage and Long-Term Preservation
S. Merrill Weiss (Merrill Weiss Group LLC, USA)
The SMPTE standards community has developed the Archive eXchange Format (AXF) to enable interchange of archive media and records and interoperability of archive systems. AXF permits storing archive records on any type of medium and recovery using any type of computing platform, enabling replacement of system physical and software components without obsoleting archive records themselves. AXF improves robustness of records when the media on which they are stored become damaged and supports the transfer of archive records between archive systems and to and from remote (e.g., “cloud”) storage. AXF stores any number of files, of any size, and of any type, on any size and type of media. It also permits “spanning” to multiple media and updating of archive records even on write-once media. AXF is in the approval process and is technically stable. Its methods will be described in some detail.
Presenter bio: S. Merrill Weiss is a consultant in electronic media technology and technology management. In a 46+ year career, he has spent over 36 years involved in work on SMPTE standards. He participated in the earliest work on digital television and has been responsible for organizing or chairing many SMPTE technology-development and standards efforts since. Among other duties, he served four years as Engineering Director for Television; he co-chaired the joint SMPTE/EBU Task Force; and he currently chairs the Working Group on the Archive eXchange Format. Merrill is a SMPTE Fellow and has received the SMPTE David Sarnoff Gold Medal and the Progress Medal. He also was a recipient of the NAB Television Engineering Achievement Award, the ATSC Bernard Lechner Outstanding Contributor Award, and the IEEE Matti S. Siukola Award. Merrill holds four U.S. and two foreign patents. He is a graduate of the Wharton School of the University of Pennsylvania.
S. Merrill Weiss
09:30 What Cybercriminals don’t want you to know when you decide how to protect your Network
Francisco Artes (NSS Labs, Inc., USA); Stefan Frei (NSS Labs, Inc., USA)
Cybercriminals persistently challenge organizations’ networks through the rapid implementation of diverse attack methodologies, state of the art malware and exploits, and innovative evasion techniques. How can Broadcasters and Entertainment companies avoid implementing security processes that are based upon inaccurate security effectiveness claims? Do multiple layers of security technologies lessen the risk to our Assets or just waste our CAPEX? The presentation will show how we turn real-world testing results of leading security technologies into actionable intelligence that Broadcasters and Entertainment companies can use to model and evaluate the true security effectiveness of their organization’s specific security layers.
Presenter bio: Francisco Artes is a recognized information security executive who has helped form many of the best practices for securing intellectual property within the computer gaming, motion picture, and television industries. Mr. Artes is also known for his work on cybercrime, hacking, and forensic security with various federal, state and local government organizations as well as law enforcement agencies such as the US Dept. of Homeland Security, FBI, Texas Rangers, and US Marshals. Prior to joining NSS Labs, Mr. Artes served as Vice President, Chief Architect / Content Protection for Trace3, and as Vice President, Security Worldwide for Deluxe Entertainment Services Group. Mr. Artes has presented on six of the seven continents and serves on several boards.
Francisco Artes
10:00 Latest Status of UMID Application Project in SMPTE
Yoshiaki Shibata (Chair, SMPTE TC-30MR SG UMID Applications, Japan)
A project is ongoing in the SMPTE standards community to enhance applications of the UMID, the SMPTE standard for globally unique audiovisual material identification. The project has already identified the UMID Application Principles, the fundamental rules every UMID-aware product must strictly follow, which are to be reflected in the upcoming revision of SMPTE RP 205. How to realize the UMID resolution protocol, which converts a given UMID into its corresponding URL, is still under intensive study. In this paper, we report the latest status of the project and on a feasibility study of DNS (Domain Name System) as a basis for UMID resolution protocol implementations.
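As a rough illustration of how DNS could underpin UMID resolution, the sketch below maps a basic UMID's 64-character hex string onto a DNS name and reads a URL back from a TXT record. The zone name, label layout, and use of TXT records are assumptions made for illustration only; the actual resolution protocol remained under study at the time of this paper.

```python
# Hypothetical UMID-to-URL resolution via DNS TXT records.
# Requires the third-party dnspython package (dns.resolver).
import dns.resolver

def umid_to_dns_name(umid_hex: str, zone: str = "umid.example.org") -> str:
    """Split a 64-character basic-UMID hex string into 4-character DNS
    labels, most-specific first (a purely illustrative naming scheme)."""
    umid_hex = umid_hex.replace("-", "").lower()
    labels = [umid_hex[i:i + 4] for i in range(0, len(umid_hex), 4)]
    return ".".join(reversed(labels)) + "." + zone

def resolve_umid(umid_hex: str) -> str:
    """Look up a TXT record assumed to hold the material's URL."""
    answer = dns.resolver.resolve(umid_to_dns_name(umid_hex), "TXT")
    return answer[0].strings[0].decode("ascii")
```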
Presenter bio: Yoshiaki Shibata is the President and Chief Consultant of metaFrontier.jp, LLC. He started his career at Sony in 1991. In 1998, Shibata joined the MPEG-7 (ISO/IEC 15938) standardization activity, in charge of the MPEG-7 schema design. In 2001, Shibata started working in the M&E industry, where his initial contributions included successful implementations of UMID and EssenceMark (TM) for professional VTRs. In 2002, he joined the XDCAM (TM) project and played a crucial role in the metadata part of the product development, including applying for more than forty patents on metadata-related technology, more than 75% of which have already been granted based on his own patent prosecution. In 2011, Shibata left Sony to become Japan’s first independent consultant for media and metadata technology, and founded metaFrontier.jp, LLC in 2012. Shibata is an active member of SMPTE, working as chair of the UMID application-related projects, and is also a member of FIMS, AMWA, EBU, ITE and IPSJ.
Yoshiaki Shibata

10:30 – 11:00

Break

Rooms: Salon 1, Salon 2

11:00 – 12:30

Evolving Technologies for Broadcast Facilities – Part 2

Three 30-minute Papers

Room: Salon 1

Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
11:00 Hybrid Broadcasting in the Internet Age – IP Core Network Concepts and Broadcast Market Exchange
Mark A. Aitken (Sinclair Broadcast Group, USA)
Globally, consideration is underway to develop and deploy non-backwardly compatible “Next Generation” DTV standards. The authors believe the opportunity to follow and learn from others allows for the creation of innovative broadcasting paradigms in the internet age. DTV technology as we know it today must change fundamentally, and moreover, must be harmonized with a virtualized IP Core (intelligent) network. To remain competitive and relevant, and to grow their business in the Internet age, Broadcasters must embrace the new challenges. The IP Core technology largely exists today; it is mature, and it is responsible for driving the mobile broadband revolution now evolving and competing with Broadcast Television’s core business. A new and unifying virtualized IP Core network architecture can enable a “Next Gen Broadcast Platform” (NGBP) for Hybrid Broadcasting in the Internet age. Interconnection of this new architecture can unleash the power of a new “Broadcast Market Exchange”.
Presenter bio: VP of Advanced Technology, Sinclair Broadcast Group (SBG), Baltimore, MD. Mr. Aitken joined the Sinclair Broadcast Group in April of 1999. He is currently responsible for representing the group’s interests in industry technical and standards issues and DTV implementation (HDTV and Mobile), and represents SBG within ATSC, OMVC, Mobile500 and other industry-related organizations. Mr. Aitken is the Chairman of ATSC TSG/S4, the specialist group responsible for Mobile DTV (Mobile/Handheld) standardization, and has been involved in the Broadcast industry’s migration to advanced services since 1987, when he first became involved with the FCC’s ACATS (Advisory Committee on Advanced Television Service) activities. Prior to his involvement with SBG, Mr. Aitken was employed by the COMARK Division of Thomcast (Thomson Broadcast). He held many diversified positions within the organization, including Manager of the Systems Engineering, RF Engineering and Sales Engineering groups, as well as Director of Marketing and Sales Support, which included DTV Strategic Planning responsibilities. While with COMARK, Mr. Aitken was part of the “Emmy Award Winning Team” that revolutionized the Broadcast industry by bringing IOT technology to the marketplace. Mr. Aitken is a member of the AFCCE, IEEE and SMPTE, and serves as a member of the Technical Advisory Group of the Open Mobile Video Coalition (OMVC). He is the author of many papers dealing with innovative RF product developments, advanced digital television systems design and related implementation strategies, holds patents for various RF devices, and was a recipient of the “Broadcasting and Cable” Technology Leadership Award in 2008.
Mark A. Aitken
11:30 Towards A Hierarchy of SDI Data Rates
John Hudson (Semtech Corp, Canada); Edward Frlan (Semtech Corporation, Canada)
Once again our ability to capture and display images has leap-frogged our ability to transport, control and monitor them. New ultra-high-definition formats and high frame rates require increasing data rates in image transport. Payload rates approaching 200 Gb/s are required to support UHDTV-2 image structures at a 120 Hz progressive frame rate. Building on the concepts presented in the paper “1080p50/60, 4K and beyond: Future Proofing the Core Infrastructure to Manage the Bandwidth Explosion” presented at the UHDTV: Ultra-High Definition Imaging Session of the 2012 SMPTE Annual Technical Conference, this paper introduces a hierarchical approach to increased SDI data rates, allowing affordable steps towards extended data rates, including those needed for UHDTV-2. It describes progress in technical standards and technology development since the 2012 conference, for single-link and multi-link SDI interfaces operating at 6 Gb/s, 12 Gb/s and 24 Gb/s using protocols which enable easy compatibility between data rates. It shows the performance that can be expected at these data rates, over copper and optical interfaces, and introduces coding concepts to improve the reliability of serial video.
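The scale of the problem is easy to verify with back-of-the-envelope arithmetic. The sketch below computes active-picture payload rates for UHDTV-2 rasters; blanking, ancillary data, and link overhead are ignored here, which is presumably why the full interface figure cited above runs closer to 200 Gb/s.

```python
# Active-payload arithmetic for UHDTV-2 image structures.

def payload_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Active-picture payload in Gb/s (no blanking or link overhead)."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# UHDTV-2 (7680 x 4320) at 120 Hz progressive:
print(payload_gbps(7680, 4320, 120, 10, 2))  # 4:2:2 10-bit -> ~79.6 Gb/s
print(payload_gbps(7680, 4320, 120, 12, 3))  # 4:4:4 12-bit -> ~143.3 Gb/s
```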
Presenter bio: John Hudson is Director of Product Definition and Broadcast Technology in the Gennum Products Group of Semtech Corporation. His responsibilities include technology strategy, product definition and international standardization for Semtech GPG’s video and datacom business. Hudson has spent 28 years in the broadcast industry, beginning his career as a design engineer at Sony Broadcast and Professional Europe. He joined Gennum in 1999 and has been instrumental in developing the company’s video and multi-media semiconductor business. An active member of SMPTE and a SMPTE Fellow, Hudson serves as Co-chair of TC 10E – Essence, and Chair of the 32NF40 Working Group on SDI Mapping. He is the author of several SMPTE Standards, and actively contributes to the development of real-time streaming media interfaces for video and D-Cinema production. Hudson is actively involved in the formation and development of the HDcctv Alliance™, and as chair of its technology committee his responsibilities include the development of all standards and compliance-testing programs. He attained an HND in Electronics and Communications Engineering from Farnborough College of Technology in 1988, is the author of 10 patents on video processing and signal integrity solutions for multi-media applications, and regularly contributes technical papers and presentations to seminars and technology events in both the broadcast and CCTV industries.
John Hudson
12:00 Live Stream Media Monitoring in the Cloud
Hiren Hindocha (Digital Nirvana, USA)
As live streams grow in popularity, so does the importance of reliable live stream monitoring. While networks are equipped with the necessary tools to monitor traditional broadcasts, they lack equivalent tools for streamed media. The solution lies in an automated, scalable system that can reliably monitor the streamed programming of broadcasters. Broadcasters also have a need for online ad monitoring, as industry experts forecast a triple-digit increase in ad growth for online media through 2017. TV stations must be able to log, record and monitor their streaming content to provide “proof of airing.” FCC web captioning requirements are further increasing the need for live stream monitoring. This paper will discuss how a cloud-based monitoring solution meets the sophistication of today’s content delivery environment and provides broadcasters with an innovative, streamlined way to meet all of their monitoring requirements. It will also touch upon the benefits of a cloud-based solution.
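One concrete probe such a monitor can run per stream is sketched below: a live HLS playlist whose media-sequence number stops advancing indicates a stalled origin. The polling interval is an assumption, and this illustrates the general idea rather than the vendor's system.

```python
# Conceptual live-stream health check: is an HLS playlist advancing?
import time
import urllib.request

def hls_media_sequence(playlist_url):
    """Fetch a live HLS media playlist and return #EXT-X-MEDIA-SEQUENCE."""
    with urllib.request.urlopen(playlist_url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            if line.startswith("#EXT-X-MEDIA-SEQUENCE:"):
                return int(line.split(":", 1)[1])
    raise ValueError("not a live media playlist")

def stream_is_advancing(playlist_url, wait_s=12.0):
    """A stalled origin keeps serving the same media-sequence number."""
    first = hls_media_sequence(playlist_url)
    time.sleep(wait_s)
    return hls_media_sequence(playlist_url) > first
```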
Presenter bio: Hiren Hindocha brings a 15+ year background in management and IT to Digital Nirvana, including more than 12 years of experience in a variety of technology-based applications. Prior to co-founding the company, Mr. Hindocha served as Vice President of eCommerce at Go.com. He holds an MS in Computer Science from Cleveland State University, and a BE in Electrical Engineering from JNT University.
Hiren Hindocha

Storing and Protecting Your Assets – Part 2

Three 30-minute Papers

Room: Salon 2

Chair: Paul Chapman (Foto-Kem Industries Inc., USA)
11:00 Advanced Storage Techniques using Scalable Media
Heiko Sparenberg (Fraunhofer IIS, Germany)
Data storage is still one of the major bottlenecks in today’s post-production workflows. Improving data throughput by combining multiple disks in a RAID configuration makes the system faster but more expensive. In contrast, scalable compression techniques can also be used to significantly increase the overall throughput of storage devices. This work gives an overview of the development of storage techniques especially designed for scalable media files such as JPEG 2000 and H.264 SVC. The paper introduces two techniques: (1) a data-relocation algorithm that exploits foreseeable access patterns to the media content and, in combination with scalable media, achieves a significant increase in HDD I/O performance by a factor of three; and (2) a novel RAID configuration especially designed for scalable media, allowing for guaranteed real-time performance of attached storage devices.
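The intuition behind the throughput gain can be sketched in a few lines, assuming a resolution-progressive codestream layout such as JPEG 2000 permits: a reduced-resolution decode needs only a prefix of each frame's bytes, so proxy playback touches a fraction of the disk. The offset index shown is an assumption for illustration; this is not the paper's relocation algorithm.

```python
# Conceptual prefix read of a resolution-progressive scalable codestream.

def read_proxy(path, levels_needed, level_end_offsets):
    """Read only the bytes required for a reduced-resolution decode.

    level_end_offsets[k] is the byte offset at which resolution level k
    ends; such an index can be parsed from the codestream or stored
    alongside the file (illustrative assumption).
    """
    with open(path, "rb") as f:
        return f.read(level_end_offsets[levels_needed - 1])

# A half-resolution proxy of a wavelet-coded frame may need only ~25%
# of the bytes, multiplying the effective HDD throughput for browsing
# workloads (numbers illustrative).
```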
Presenter bio: Heiko Sparenberg, born in 1977, received his Diploma degree in Computer Science in 2004 and a Master’s degree in 2006. He joined Fraunhofer IIS in Erlangen as a software engineer in 2006. Today, Heiko is Head of the Digital Cinema group and responsible for several software developments, e.g. the easyDCP software suite for digital cinema. His research topics are scalable media-file management, post-production software in the field of Digital Cinema, and image-compression algorithms, with a focus on scalable codecs including JPEG 2000 and H.264 SVC.
Heiko Sparenberg
11:30 French Cinema Goes IMF
François Helt (Highlands Technologies Solutions, France); Benoit Fevrier (EVS-Opencube, France); Frantz Delbecque (Eclair Group, France); Xavier Brachet (Mikros Image, France); Hans-Nikolas Locher (Commission Supérieure Technique de l’Image et du Son, France); Marc Bourhis (FICAM, France); Cristian Garcia (EVS, USA)
The French government has recently funded a cinema digitization program to help make past and current film catalogues available in digital form. The program stipulates that the digital assets be delivered in a dedicated open file format. An ad hoc committee has been formed at the request of the Centre National du Cinema to review the issue. IMF Application 2 appears to be a natural fit for implementing the program at a practical level. The application meets the requirements of a demanding workflow while preserving the highest quality for film. It also addresses the need for interoperability and the diversity of film sources.
Presenter bio: Hans-Nikolas Locher is in charge of the research and development sector at the Commission Supérieure Technique de l’Image et du Son (CST), a French professional association of cinema, audiovisual and multimedia technicians and artists. He accompanies exhibitors through the transition to digital operation and actively participated in the training of projectionists. He implements software tools for testing and validation of digital cinema files, and is the architect of the information system for monitoring the projections for the Cannes Film Festival.
Presenter bio: Cristian Garcia has been appointed as Business Development Manager for EVS’s Media division in The Americas. Based in EVS’s west-coast office in Burbank, California, Garcia will spearhead business development efforts, as well as provide pre-sales engineering and product specialist support for the sales team. Prior to joining EVS, Garcia held various management positions at Astrodesign, spanning product and technical marketing, and business development for its 4K and JPEG 2000 solutions. He was also Technical Sales Manager at Evertz, where he focused on its multi-viewer and video transport products.
Hans-Nikolas Locher
Cristian Garcia
12:00 Archives – “It’s a retrieval problem, not a storage problem.”
Josef Marc (Archimedia Technology Inc., USA); Chi-Long Tsang (Archimedia Technology Inc., Hong Kong); Victor Steinberg (VideoQ Inc., USA); Maxim Levkov (Artek Media International Inc., USA)
MXF standards SMPTE ST 377-1:2011 and ST 422:2006; SMPTE Technical Committee TC-31FS AHG ST 422 Revision (JPEG 2000 in MXF); the Advanced Media Workflow Association; and related SMPTE standards all clarify MXF implementations for the most active use cases. To illustrate technical advances, this paper presents laboratory observations of MXF video clips from multivendor sources alongside mathematically generated test patterns, master- and archival-grade video content, and an MXF metadata viewer. The paper focuses especially on reformatting JPEG 2000, uncompressed, AVI, and MOV master/archive files into MXF for interchange and preservation. Video clips are shown.
Presenter bio: Josef Marc co-founded Archimedia Technology with Mark Gray and Chi-Long Tsang. At Front Porch Digital and SAMMA he designed media archives, asset management, mass digitization, and online video publishing systems. He led Ascent Media’s technical part of launching Verizon FiOS TV, and the project office for Sony’s part in launching DirecTV. Consulting to Sony Corp he co-wrote a book on interactive TV and Web media. He designed archives at The United Nations Int’l Criminal Tribunal for Rwanda, and hosted workshops for the Assoc. of Moving Image Archivists. At Sony SIC he managed installations for CBS’ Olympics broadcasts, The Game Show Network launch, JumboTrons etc. He was CTO of ConnectOne, a triple-play CLEC offering IP video, telecom and Web services. He is a member of the SMPTE and the Assoc. of Moving Image Archivists.
Josef Marc

12:30 – 14:00

Fellows Luncheon

Room: Mount Olympus Room
12:30 Fellows Luncheon
Glenn Reitmeier (NBC Universal, USA)
SVP of Advanced Technology at NBC Universal
Presenter bio: Glenn Reitmeier is Senior Vice President of Advanced Technology at NBC Universal, leading the company’s technical efforts on industry standards, government policy, commercial agreements, anti-piracy operations and advanced engineering. Since joining NBC in 2002, Glenn has been involved in the creation of NBC’s first high-definition cable channel, Universal-HD, launching DTV multicast programming and mobile broadcasting, and the distribution of NBCU content to new digital consumer devices, including PCs, game consoles and personal devices. Glenn served as Chairman of the Advanced Television Systems Committee (ATSC) from 2006-2009, which under his leadership developed the new ATSC-Mobile standard. He is President of the Open Authentication Technology Committee (OATC), and a Board member of NABA (North American Broadcasters Association). Prior to joining NBC Universal, Glenn spent 25 years in digital video research and development at Sarnoff Laboratories. He is widely recognized as a pioneering visionary, creator and architect of digital television. Early in his career, he was instrumental in establishing the ITU 601 component digital video standard, which is currently in worldwide use as the backbone of modern television broadcasting and production facilities. During the competitive phase of HDTV standardization, Glenn led the Sarnoff-Thomson-Philips-NBC development of Advanced Digital HDTV, which pioneered the use of MPEG compression, packetized transport, and multiple video formats. Glenn was a key member of the Digital HDTV Grand Alliance, taking a leadership role in its formation and in all of its technical decisions, communications with government and industry, and interoperability efforts that led to establishing the ATSC digital television standard. Glenn is a Fellow of the SMPTE (Society of Motion Picture and Television Engineers) and is a recipient of the Progress Medal and the Leitch Gold Medal. He is also an inaugural member of the CEA’s (Consumer Electronics Association) Academy of Digital Television Pioneers. He holds over 50 patents in digital video technology and is recognized in the New Jersey Inventors Hall of Fame.
Glenn Reitmeier

14:15 – 15:45

Color Management

Three 30-minute papers

Room: Salon 1

Chair: Joseph Slomka (Foto-Kem Industries, Inc., USA)
14:15 Beyond BT.709
Maciej Pedzisz (British Sky Broadcasting Ltd, United Kingdom)
For years, BT.709 has successfully defined the primary chromaticities of display devices and the transformation to the Y’Cb’Cr’ color space used for video compression. Current advances in display technologies, along with the introduction of UHDTV, make the following question more relevant than ever before: can we do better than BT.709? This paper tries to answer that question by highlighting methods of extending the color gamut, showing the difficulties in transforming from one color space to another, and pointing out the different conversion methods used to represent RGB data in the Y’Cb’Cr’ color space. It emphasizes the importance of the Constant Luminance Transform for color perception and compares BT.709 to BT.2020 from different viewpoints. Finally, the advantages and disadvantages of switching to BT.2020 are presented from a broadcast engineering perspective.
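For reference, the non-constant-luminance luma weights that separate the two Recommendations can be written down directly; the short sketch below shows how the same gamma-encoded RGB triple yields a different Y' under BT.709 and BT.2020, which is one reason streams cannot simply be relabeled from one color space to the other.

```python
# Luma (Y') from gamma-encoded R'G'B' under the two Recommendations.

def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luma_2020(r, g, b):
    return 0.2627 * r + 0.6780 * g + 0.0593 * b

# A saturated green primary encodes to different luma values:
print(luma_709(0.0, 1.0, 0.0))   # 0.7152
print(luma_2020(0.0, 1.0, 0.0))  # 0.6780 -- same pixels, different Y'
```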
Presenter bio: Maciej Pedzisz received MSc and Engineer degrees in Telecommunications from the Military University of Technology, Warsaw, Poland, in 1998, and DEA and PhD degrees in Electronics from the University of Western Brittany, Brest, France, in 2003 and 2006 respectively. From 1998 to 2002, he worked as an Electronic Warfare Researcher (COMIT) for the Polish Army. After his PhD, he pursued research in nonlinear signal processing (Imperial College London, UK) and 3GPP wireless communications (Interuniversity Microelectronics Centre, Leuven, Belgium). From 2010 to 2011, he worked for Atmel, where he designed image-processing algorithms and implemented them in firmware on AVR MCUs. In 2011, he joined British Sky Broadcasting as Senior Research Engineer, where his work revolves around image and video processing, colorimetry, subjective quality assessment, human perception, and video compression. His interests include UHDTV, HEVC, BT.2020, and heterogeneous programming for CPU/GPU architectures.
Maciej Pedzisz
14:45 An IIF based Post Production Infrastructure Developed for Feature Film Production and Higher Education in Iraq
Christopher Woollard (University of Greenwich & BKSTS, BSC, United Kingdom)
This paper presents a digital post-production distributed environment developed in Iraq to enable the reintroduction of effective feature film production. A joint development between the Kurdish Ministry of Culture, Film Directorate and the University of Greenwich, London, the system has enabled effective post production to be carried out in the cities of Sulaymaniya, Erbil, Kirkuk and Baghdad. Initially supporting ARRI Alexa, RED and conventional 35mm production, the unique problems of film production are presented along with techniques for combining the work of multiple crews using differing camera systems. High-speed digital interchange and laboratory work is presented along with arrangements for digital grading using large cinema systems and digital film distribution. Sample results will be screened showing how the system has been effectively used for postgraduate training involving film schools in the cities mentioned above.
Presenter bio: Program Leader of Graduate Film School, University of Greenwich. Masters and PhD program leader, including the Masters degree in Cinematography and Post Production, University of Greenwich, London. Cinematographer, various feature films. Member of the BSC. Council member and Fellow of the BKSTS. Member of the SMPTE. Cinematographer to the Government of Iraq. Former member of technical staff, Thinking Machines and Principal Engineer, Wang Laboratories.
Christopher Woollard
15:15 Academy Spectra Software Tool: Color rendering analysis application
Jonathan Erland (Composite Components Company & Society of Motion Picture and Television Engineers, USA)
At the 2009 Conference, we introduced the AMPAS Sci-Tech Council SSL Project. We described the efficiency of LEDs and discussed their color rendering drawbacks. We have since expanded our capacity to assess the efficacy of new forms of lighting, even those for which information exists only in the technical literature. We have built a computer modeling application which emulates the image-forming chain, from light source through camera to display. This paper will discuss how we will make that ability available to the community via an open-source, multi-platform application version of our in-house analysis process. This will address the needs of the Director of Photography who grapples with what lights to deploy on a set, as well as the needs of the luminaire designer who designs instruments to light our future films. Being open source, it will empower the community to expand its usefulness for the benefit of all.
Presenter bio: From his student filmmaker days in London through industrial design work to his founding role in industry technical organizations, Visual Effects Society Fellow, Jonathan Erland has been engaged in both the dramatic and technical side of the story-telling process for over 50 years. A member of the Star Wars VFX crew, he has six patents and four Academy Awards for innovative technologies. A Life Fellow of SMPTE, he’s authored 20 papers, served as Program Chair, and received the Journal Award and Fuji Gold Medal. At AMPAS, he has served as a Governor, establishing Visual Effects as a branch. He’s also a member of the Science and Technology Council, Scientific and Engineering Awards and numerous other committees. He’s received an Academy Commendation for “solving High-Speed Emulsion Stress Syndrome in film stock” and the 2012 John A. Bonner Medal for “outstanding service and dedication in upholding the high standards of the Academy.”
Jonathan Erland

Advancements in Image Processing – Part 1

Three 30-minute Papers

Room: Salon 2

Chair: Jim DeFilippis (TMS Consulting, USA)

An eclectic collection of papers that dive deep into topics related to the improvement of motion imagery, the post-production workflow, and best practices for achieving the best results. From filtering High Dynamic Range pictures, to handling mixed frame rates in post production, to providing dynamic calibration of images without static test patterns, these are topics that will intrigue, illuminate and challenge the audience.

14:15 Filtering in a High Dynamic Range (HDR) Context
Gary Demos (Image Essence LLC, USA)
Many image processing steps involve filters or filter-like constructs (such as wavelets). Filters are used for displacement, such as flowfield displacement or block displacement within codecs. Filters are used for re-sizing, and for detail-band-splitting in resolution hierarchies. Wavelets also are applied in a manner similar to filters, and can be used for detail-band-splitting in some constructions. The Discrete Cosine Transform similarly represents image spatial frequencies in an array of coefficients for such frequencies. When extending filter and wavelet techniques to High Dynamic Range (HDR), the general assumptions about acceptability of filtering errors in many common filter uses must be revised to consider very large increases in brightness value differences. Filter errors, especially from negative filter lobes, are often greatly magnified within the intended viewing range. This paper explores filtering practices that improve filter, wavelet, DCT, and other common filtering elements when applied to HDR.
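A small numerical illustration of the premise (ours, not the paper's): the negative lobes of a sharp resampling kernel cause undershoot that is barely visible across an SDR edge but becomes enormous in absolute terms across an HDR edge in linear light.

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-windowed sinc kernel; sharp, but with negative lobes."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

# Interpolate a new sample at position 1.5, one sample before a step
# edge located between samples 2 and 3 of a 6-sample signal.
positions = np.arange(6.0)
taps = lanczos(positions - 1.5)
taps /= taps.sum()

sdr_edge = np.array([0.1, 0.1, 0.1, 1.0, 1.0, 1.0])  # ~10:1 contrast
hdr_edge = np.array([0.1, 0.1, 0.1, 1e4, 1e4, 1e4])  # ~100000:1 contrast
print(taps @ sdr_edge)  # ~ -0.003: a barely visible undershoot
print(taps @ hdr_edge)  # ~ -1142: the same lobes scaled by the HDR edge
```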
Presenter bio: Gary Demos is the recipient of the 2005 Gordon E. Sawyer Oscar for lifetime technical achievement from the Academy of Motion Picture Arts and Sciences. He has pioneered in the development of computer generated images for use in motion pictures, and in digital film scanning and recording. He was a founder of Digital Productions (1982-1986), Whitney-Demos Productions (1986-1988), and DemoGraFX (1988-2003). He is currently involved in digital motion picture camera technology and digital moving image compression. Gary is CEO and founder of Image Essence LLC, which is developing wide-dynamic-range codec technology based upon a combination of wavelets, optimal filters, and flowfields.
Gary Demos
14:45 Advancements in Image Processing: Adaptive File based Standards Conversion of Mixed Cadence Material – Towards the Holy Grail?
Bruce Devlin (AmberFin, United Kingdom)
Strange as it may seem, some content today is shot digitally in film mode and then edited at video rates. This may be accidental, may be 1980s archive content, or may result from ignorance of the consequences of this editing process. After all, it looks just fine in the editor. Maybe only 10% of content exhibits this symptom, but it causes 90% of the problems downstream. Advances in workflow, metadata handling, image processing and software speeds are all coming together to bring tools to the market that could not be built economically 10 years ago. One such tool has now been developed to address the need to handle multi-frame-rate workflows elegantly and in high quality. This paper introduces a novel software approach for handling such material that achieves both spatial and temporal fidelity in a completely automated or manually assisted fashion, depending on the value of the material.
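As background, the classic first step in repairing mixed-cadence material is detecting the pulldown signature in field differences; a minimal sketch follows (a textbook approach, not this tool's algorithm). In 3:2 pulldown, one same-parity field pair in every five is a repeat, so roughly a fifth of the differences collapse toward zero on moving content.

```python
# Naive 3:2 pulldown detection from same-parity field differences.
import numpy as np

def field_diffs(fields):
    """Mean absolute difference between each field and the previous
    field of the same parity. fields: list of float arrays, T/B/T/B..."""
    return np.array([np.abs(fields[i] - fields[i - 2]).mean()
                     for i in range(2, len(fields))])

def looks_like_pulldown(fields, threshold=1.0):
    """Only meaningful on content with motion: static scenes make all
    diffs near zero regardless of cadence."""
    d = field_diffs(fields)
    near_zero_fraction = (d < threshold).mean()
    return abs(near_zero_fraction - 0.2) < 0.05
```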
Presenter bio: Bruce Devlin has been working in the media industry for 25 years. In his career he has designed RF antennas, circuit boards, FPGAs, ASICs, hardware systems, video algorithms, compression algorithms, software applications, software systems and media workflows. He has worked for the BBC, Thomson, and Snell & Wilcox, and is currently the Chief Technical Officer of AmberFin, where he leads the company’s technology strategy and helps media organisations around the world improve their businesses by migrating to file-based working. Although a technologist at heart, Bruce never forgets that profitable, successful media companies that show great content pay for all the technological toys he invents. Bruce is an alumnus of Queens’ College, Cambridge, a member of the IABM and a fellow of SMPTE; he has won many technology awards, authored many patents, co-designed the MXF specification, and written books and standards that help to drive the professional media industry forwards.
Bruce Devlin
15:15 Camera Radiometric Calibration from Motion Images
Ricardo Figueroa (Rochester Institute of Technology, USA); Jinwei Gu (Columbia University, USA)
We present a research study for estimating a camera’s radiometric response function from a series of motion images. Current methods to estimate this function rely on multiple static images taken under different exposures or different lighting conditions, the use of measurement charts, or image sets from point-and-shoot cameras only, or they assume a simplistic model of the function’s shape. All of these become impractical when a robust, efficient method is needed that can fit into motion picture industry post-production workflows (ACES, for example), where simplicity and accuracy are very important. In this paper we present a research methodology based on the work of Lin et al., expanding it to use measured color characteristics in a motion image sequence and including additional constraints during the estimation. Using footage from an ARRI D21, preliminary results are presented and discussed.
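As a deliberately simplified sketch of the underlying constraint (the paper's method, which extends Lin et al. with color characteristics from motion sequences, is far more general): if the response is assumed to be a pure power law I = E**a, two registered frames with a known exposure ratio pin down the exponent.

```python
# Toy radiometric-response fit under a power-law assumption.
import numpy as np

def fit_power_response(img1, img2, exposure_ratio):
    """If I = E**a, then img2/img1 == exposure_ratio**a per pixel,
    so a = log(I2/I1) / log(k); a median makes the estimate robust."""
    mask = (img1 > 0.01) & (img1 < 0.99) & (img2 > 0.01) & (img2 < 0.99)
    log_ratios = np.log(img2[mask] / img1[mask])
    return np.median(log_ratios) / np.log(exposure_ratio)

def linearize(img, a):
    """Invert the fitted response to recover relative scene exposure."""
    return np.power(img, 1.0 / a)
```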
Presenter bio: Ricardo is currently Assistant Professor for the Motion Picture Science Program at Rochester Institute of Technology (RIT). Ricardo joined RIT after working at Kodak for 10 years, where he held positions as Film and Digital Lab Manager, Digital/Hybrid Technologies Regional Director, Digital Imaging Engineer and Kodak Operating System Manager. Ricardo holds a BSEE and MSEE from the University of Puerto Rico and is currently pursuing his PhD in Computing and Information Sciences at RIT. Outside of work, Ricardo is an avid triathlete, soccer player and golfer. He is active in professional and civic associations, currently serving as chair for the Society of Motion Picture and Television Engineers (SMPTE) Rochester professional chapter and various community organization committees. Ricardo is also a past president and board member of Kodak’s Hispanic Organization for Leadership and Advancement (HOLA), and of the Society of Professional Hispanic Engineers (SPHE), Rochester chapter.
Ricardo Figueroa

15:45 – 16:15

Break

Rooms: Salon 1, Salon 2

16:15 – 17:45

File Based Workflows

Three 30-minute Papers

Room: Salon 2

Chair: Paul Chapman (Foto-Kem Industries Inc., USA)
16:15 Faxes, emails, pagers, and The Macarena – Adios to Relics of the ’90s
Christopher J Lennon (MediAnswers, USA); Angela Tietze (Entertainment Communications Network, Inc., USA)
You may laugh, but the first two items in this paper’s title are still in daily use as data exchange methods in today’s media organizations. How can that be? The ’90s were 20 years ago. The fact remains, however, that the state of the art for the exchange of traffic instructions (which spots to air when) remains fax and email. SMPTE’s Broadcast eXchange Format (BXF) working group has set out to fix this. Automating the flow of traffic instructions from ad agencies to media organizations is perhaps the most anticipated BXF 3.0 goodie. Key industry players have put in countless hours to create an XML schema for the exchange of this data that can be used in modern service-oriented architectures. We’ll explore what has been done, and look at the potential impacts on media organizations over the coming years. We’re hoping this also means we’ll never hear “The Macarena” again.
Presenter bio: Chris Lennon serves as President & CEO of MediAnswers. He is a veteran of over 25 years in the broadcasting business. He also serves as Standards Director for the Society of Motion Picture and Television Engineers. He is known as the father of the widely-used Broadcast eXchange Format (BXF) standard. He also participates in a wide array of standards organizations, including SMPTE, SCTE, ATSC, and others. In his spare time, he’s a Porsche racer, and heads up a high performance driving program.
Christopher J Lennon
16:45 North American Broadcasters Association Study of Issues in file based interoperability and watermarking
Clyde Smith (FOX NE&O, USA); Thomas Bause Mason (NBCUniversal, USA)
The NABA Technical Committee initiated a working group to focus on issues in file-based workflows and watermarking; this paper provides an overview of the issues the group covered and its recommendations. As consumer demand for flexible access to television content increases, the modes and mechanisms used by consumers to view this content are rapidly evolving. Consequently, the business and distribution models used to meet these needs are also changing. The industry continues to transition to file-based workflows under no particular standard. The multitude of standards, file formats, and frame rates is leading to an explosion in content creation and distribution cost. Industry consensus indicates that the technology used to deliver linear content is not scalable to so many formats. What steps can the industry take to address these issues? This paper will provide an overview of issues, current activities and potential solutions.
Presenter bio: Clyde Smith is the Senior Vice President of New Technologies for FOX Network Engineering and Operations. In this role he supports Broadcast and Cable Networks, Production and Post Production operating groups in addressing their challenges with new technologies, focusing on standards, regulations and lab proof-of-concept testing and evaluation. Prior to joining FOX he was SVP of global broadcast technology and standards for Turner Broadcasting System, Inc., where he provided technical guidance for the company’s domestic and international teams. He previously held positions as SVP of Broadcast Engineering Research and Development at Turner, SVP & CTO at Speer Communications, and Supervisor of Communications Design and Development Engineering for Lockheed Space Operations at the Kennedy Space Center. Smith also supported initiatives for Turner Broadcasting that were recognized by the Computer World Honors program with the 2005 21st Century Achievement Award for Media Arts and Entertainment, and a Technology and Engineering Emmy Award for Pioneering Efforts in the Development of Automated, Server-Based Closed Captioning Systems. In 2007 he received the SMPTE Progress Medal, and in 2008 the Storage Visions Conference Storage Industry Service Award.
Presenter bio: Thomas Bause Mason is Director for Advanced Digital Media Technology in Advanced Engineering at NBCUniversal. As part of the Advanced Technology Team, Thomas researches and implements emerging technologies as well as standards for file-based workflows for production, post production and distribution, including traditional broadcast, cable and the web. In addition, Thomas works with many different NBCUniversal operational teams, implementing standards and technology at the corporate level. Formerly, Thomas was with Ascent Media, where he ran encoding and DAM system ingest operations and helped business development with implementing new media workflows. Previously, Thomas worked in Germany as a programmer developing process visualization software and database solutions. He also managed a quality control department for several private broadcast networks at a large broadcast service provider. Thomas has a bachelor’s degree in Media Technology. He has been a SMPTE member since 2007 and is currently the co-chair of the SMPTE 35PM Media Packaging and Interchange Technology Committee as well as the Chair of the SMPTE Study Group on UHDTV Ecosystems.
Clyde Smith
Thomas Bause Mason
17:15 Using LTO and LTFS for File Based Program, Graphics, and Footage Delivery
Josh Derby (Discovery Communications, USA); Bert Collins (Discovery Communications, LLC, USA); Charles Myers (Discovery Communications, LLC, USA); Adam Weyl (Discovery Communications, LLC, USA)
In 2013 Discovery Communications began using LTO tapes to receive programs, graphics, and footage from the large community of production companies who create programs for Discovery’s networks around the world. In our presentation we will discuss the advantages and challenges of LTO delivery, the process and experience of the transition for Discovery and its production community, and the new tools that Discovery and its partners developed to enable effective file-based LTO workflows.
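One reason LTFS suits this workflow is that a tape mounts as an ordinary filesystem, so a receiving station can verify a delivery with plain file tools. A sketch of such a check follows; the manifest format is an assumption for illustration, not Discovery's specification.

```python
# Verify files on a mounted LTFS volume against a checksum manifest.
import hashlib
import os

def md5_of(path, chunk=8 * 1024 * 1024):
    """Stream the file in large chunks; tape prefers sequential reads."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_volume(mount_point, manifest):
    """manifest: dict of relative path -> expected MD5 hex digest.
    Returns the list of paths that failed verification."""
    return [rel for rel, expected in manifest.items()
            if md5_of(os.path.join(mount_point, rel)) != expected]
```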
Presenter bio: As part of Discovery’s Development and Technology team, Josh Derby develops new solutions for Discovery’s production, post production, and distribution efforts worldwide. As Discovery’s professional media research and standards team, Discovery Development and Technology sets the company’s standards for technology and workflows. The group also works with manufacturers and industry leaders to develop new products and technologies that support Discovery’s global family of networks. The group’s current research is focused on Ultra High Definition production, post, and distribution, as well as the implementation of end-to-end file based workflows. Josh has held a number of roles in his 13 years at Discovery. He has managed several groups within Discovery’s post production operations. He played a major role in the 2011 launch of 3Net, Discovery’s 24-hour 3D channel and the launch of Discovery HD Theater in 2002. Before moving to Development and Technology in 2007, Derby served as the Director of Discovery’s internal training and workflow group. Prior to Discovery, Derby worked as a member of the engineering team at WHD-TV, the model high definition television station, in Washington DC. He worked alongside many manufacturers and engineers as the industry took some of its first major steps towards the adoption of HDTV. Derby holds a bachelor’s degree in Media Arts and Design with concentrations in television production and digital audio production from James Madison University in Harrisonburg, Virginia.
Presenter bio: Bert Collins is the Director of Technology and Standards for Discovery Communications. In this role he facilitates media research and development for Discovery’s global media, technology, and production operations division. The R&D team focuses on steering technical evolution and standards development as well as helping to position Discovery to benefit from emerging technologies. In the 15 years Bert has been with Discovery he has been a major contributor to a number of technical initiatives involving the migration from analog to digital media as well as providing technical guidance for the launch of emerging networks such as 3net, Discovery’s 3D network.
Josh Derby

Stereoscopic 3D Imaging, Processing, Distribution and Display

Three 30-minute Papers

Room: Salon 1

Chair: Kevin J Stec (Dynamic Digital Depth, Inc., USA)

Presenting stereoscopic content does not always provide expected results. We look at how the brightness of the image is affected by frame sequential viewing, what happens when the frame rate is increased, and how the perceived size of objects can change with the viewing angle.

16:15 Your eyes don’t do the math: Effect of temporal display protocols on perceived brightness
Zoltan Derzsi (Newcastle University, United Kingdom); Sindre Henriksen (Newcastle University, United Kingdom); Jenny C. A. Read (Newcastle University, United Kingdom)
Many stereoscopic 3D displays rely on temporal multiplexing, in which the image is presented to each eye in alternation. Since this alternation is fast enough that no flicker is perceived, the visual system is assumed to integrate luminance over time. This implies that a temporally multiplexed display with a 50% duty cycle would need to have twice the physical luminance of a non-multiplexed display in order to appear equally bright. However, this assumption has not been tested, and we find it is incorrect even at very high frequencies. We discuss the implications for image quality.
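The assumption under test can be stated as one line of arithmetic: under simple temporal integration, apparent brightness scales with duty cycle, so a 50% duty-cycle display would need double the panel luminance. The sketch below encodes that linear model, which the paper reports fails even at flicker-free rates.

```python
# The time-averaged-luminance model that the paper puts to the test.

def assumed_apparent_luminance(panel_luminance_nits, duty_cycle):
    """Linear (Talbot-Plateau style) time-averaging over one period."""
    return panel_luminance_nits * duty_cycle

print(assumed_apparent_luminance(200.0, 1.0))  # 2D display: 200 nits
print(assumed_apparent_luminance(200.0, 0.5))  # sequential 3D: 100 nits
print(assumed_apparent_luminance(400.0, 0.5))  # the predicted 2x fix
```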
Presenter bio: Zoltan is an electronic engineer, open source enthusiast and qualified HAM radio operator. He is studying Neuroscience at Newcastle University. Notable interests are HF and free-space optical communication, embedded systems, stereoscopic vision science, neural networks, and electrophysiology.
Zoltan Derzsi
16:45 Study on the acceptance of Higher Frame Rate Stereoscopic 3D
Wolfgang Ruppel (RheinMain University of Applied Sciences, Germany); Yannic Alff (RheinMain University of Applied Sciences, Germany); Thomas Goellner (RheinMain University of Applied Sciences, Germany)
Higher Frame Rate (HFR) Stereoscopic 3D in Digital Cinema was proposed by James Cameron in 2011. Based on the HFR material provided by James Cameron and additional animation footage, RheinMain University has conducted a series of subjective tests on the acceptance of HFR stereoscopic 3D. The tests were conducted using a digital cinema projection system with an Integrated Media Block. The additional animated HFR footage was created in an interdisciplinary cooperation. The results of the study show a clear preference for HFR amongst the participants of the subjective tests. Of particular interest was learning about the difference in acceptance and reception between 48 fps/eye and 60 fps/eye. Finally, down-conversion from 48 fps/eye and 60 fps/eye to standard 24 fps/eye was compared to footage natively shot at 24 fps/eye. The animation HFR footage created by RheinMain University will be available for screening during the session.
Presenter bio: Prof. Dr. Wolfgang Ruppel has more than 10 years of experience in Digital Cinema and high-quality imaging applications. Since 2006, he has held a professorship for Media Technology at RheinMain University of Applied Sciences. From 1994 until 2006 he was with the research & development units of Deutsche Telekom. During his time at T-Systems, he headed the development of a Digital Cinema server and a satellite distribution platform. Wolfgang Ruppel received the Diploma degree and the Dr.-Ing. degree in Communications Engineering from the Technical University of Darmstadt, Germany, in 1989 and 1994, respectively. Wolfgang Ruppel is a member of SMPTE and ITG (Informationstechnische Gesellschaft). Since 2007, he has led the ITG working group 3.4 “Film Technologies”.
Wolfgang Ruppel
17:15 Controlling Miniaturization in Stereoscopic 3D Imagery
Michael D. Smith (Wavelet Consulting LLC, USA); Jason Malia (Georgia Institute of Technology & Warner Bros Entertainment Inc., USA)
Viewers of stereoscopic 3D imagery can perceive the absolute size of objects within a scene. On larger screens, the perceptual size of objects commonly appears bigger than reality, which matches viewers’ expectations for big-screen “larger than life” theatrical experiences. The geometry involved in stereoscopic imaging can cause the perceptual size of objects to appear smaller than reality (“miniaturization”). Miniaturization can be distracting for viewers and is more extreme on smaller screens like 3DTVs and handheld 3D devices. A common misconception is that miniaturization occurs only when the stereo camera separation (interaxial) is larger than the human eye separation (interocular) of approximately 2.5 inches. Counter-examples to this misconception will be provided, as well as an analysis framework that allows stereo-camera operators to accurately predict when the miniaturization effect will occur on both larger theatrical screens and smaller screens like 3DTVs and handheld 3D devices. Example 3D images will be shown illustrating control of perceptual size.
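The underlying viewing geometry is textbook material and is sketched below (this is not the authors' full analysis framework): the eyes triangulate a point from its screen parallax, and perceived size scales with the triangulated distance.

```python
# Textbook stereo viewing geometry; all distances in the same unit.

def perceived_distance(viewing_distance, interocular, screen_parallax):
    """Distance at which the eyes triangulate a point whose left/right
    images are separated by screen_parallax (positive = uncrossed)."""
    return viewing_distance * interocular / (interocular - screen_parallax)

def perceived_width(on_screen_width, viewing_distance, interocular,
                    screen_parallax):
    """Scale the on-screen size by the triangulated distance."""
    z = perceived_distance(viewing_distance, interocular, screen_parallax)
    return on_screen_width * z / viewing_distance

# A 3DTV viewed at 2000 mm, interocular 63 mm: an object drawn 300 mm
# wide with 30 mm of crossed parallax (in front of the screen) ...
print(perceived_width(300.0, 2000.0, 63.0, -30.0))  # ~203 mm: shrunk
```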
Presenter bio: Michael D. Smith is a consultant working in the areas of digital imaging, signal processing and intellectual property, with recent work for organizations including Warner Bros., Sony Pictures Entertainment, other Hollywood studios and National Oceanic and Atmospheric Administration (NOAA). In addition to his technical work, Michael also performs intellectual property consulting related to infringement and validity analysis of patents. From 2008-2013, he worked on several patent cases for TiVo that resulted in verdicts, judgements and settlements totaling $1.6 billion. He has also worked on matters for other clients including Research in Motion, DivX Networks, Thomson, Polycom, MTV Networks, Citrix Systems and SportVision. This work often involves computer software source-code analysis and the search and analysis of prior art. Michael was editor of the book “3D Cinema and Television Technology: The First 100 Years” published by SMPTE in 2011. Michael is a member of the Board of Editors of the SMPTE Motion Imaging Journal and is a peer-reviewer for IEEE Transactions on Image Processing. He is a member of the professional organizations SMPTE, SPIE and AES. He received the B.S. and M.S. degrees in Electrical Engineering from UCLA in 2001 and 2004 respectively. Michael can be reached via email at miksmith@miksmith.com
Michael D. Smith

18:00 – 18:30

Annual Membership Meeting

Room: Salon 1

Thursday, October 24

09:00 – 10:30

Advancements in Image Processing – Part 2

Three 30-minute Papers

Room: Salon 2

Chair: Sara Kudrle (Miranda Technologies & SMPTE Western Region Governor, USA)

Compressed into these papers is the leading-edge thinking on encoding moving images, whether at the high end (4K), for the vast Internet, or for the consumer world. How will HEVC enable high-quality delivery at the contribution level of production? Will there be a universal codec for delivery over the Internet? What are the requirements beyond HDTV for visually lossless encoding of TV images for delivery to consumer displays? These questions should give the audience something to chew over before lunch.

09:00 New generation of Contribution Services using the new HEVC 422 Profile, for 4K format
Juan Jose Anaya (SAPEC, Spain); Damian Ruiz (Universitat Politècnica de València, Spain)
The first version of the new High Efficiency Video Coding (HEVC) video compression standard was completed in January 2013. HEVC was born with the aim of repeating the success of the previous standard, H.264/AVC, for emerging services with resolutions beyond HD, such as the 4K and 8K formats. Three profiles have been approved, supporting the 4:2:0 chroma format and 8-bit and 10-bit pixel depths. In July 2013 a new draft of HEVC, named “Range Extensions”, was released in order to cover the needs of high-quality professional production, supporting the 4:2:2 and 4:4:4 chroma formats and pixel depths beyond 10 bits. This paper addresses the performance of HEVC using the new 4:2:2 profile and compares it to the H.264 “High 4:2:2 Profile”, targeting contribution services. The simulation results for 4K formats will reveal the bandwidth savings that broadcasters and network operators can achieve using the new professional profile of HEVC.
Presenter bio: Damián Ruiz Coll received the M.S. degree in Telecommunications Engineering from the Polytechnic University of Madrid (UPM), Spain, in 2000. He is a PhD candidate in Computer Science and completed a doctoral research stay at Florida Atlantic University (FAU), United States, in 2012. He is currently working as a researcher at the Mobile Communication Group (MCG) of the Institute of Telecommunications and Multimedia Applications (iTEAM), where his research focuses on the real-time optimization of HEVC (High Efficiency Video Coding) for broadcasting and mobile networks. He participates as a member of DVB (Digital Video Broadcasting), FOBTV (Future of Broadcast TV), and the “Beyond HD” group of the EBU (European Broadcasting Union). He has more than 15 years of experience as an engineer at the Spanish public broadcaster (RTVE), where he was involved in R&D projects, including collaborations with international committees such as DVB, EBU, DigiTAG, and the “HDTV Spanish Forum” of the Ministry of Industry. He has collaborated on several EBU video coding test plans for video quality assessment of the new generation of production and contribution codecs.
Damian Ruiz
09:30 A technical overview of VP9 – The latest open-source video codec
Debargha Mukherjee (Google, Inc., USA); Jim Bankoski (On2 Technologies, USA); Ronald Bultje (Google, Inc., USA); Adrian Grange (Google, Inc., USA); Jingning Han (Google, USA); John Koleszar (Google, Inc., USA); Paul Wilkins (Google, Inc., USA); Yaowu Xu (Google, Inc., USA)
Google has recently finalized a next-generation open-source video codec called VP9, as part of the libvpx repository of the WebM project (http://www.webmproject.org/). Starting from the VP8 video codec released by Google in 2010 as the baseline, several enhancements and new tools were added, resulting in the next-generation bitstream VP9, which was finalized in June 2013. This paper provides a technical overview of the coding tools included in VP9, along with the motivation for their inclusion. Coding performance comparisons with other state-of-the-art video codecs, H.264/AVC and HEVC, will be presented on standard SD and HD test sets.
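For readers who want to try the bitstream, the reference encoder ships in the same libvpx repository; a hedged sketch of driving it from Python follows (flag spellings can vary between libvpx releases, so treat this as illustrative rather than a verified command line).

```python
# Drive the libvpx reference encoder (vpxenc) on a Y4M source.
import subprocess

def encode_vp9(src_y4m, out_webm, kbps=2000):
    subprocess.run([
        "vpxenc",
        "--codec=vp9",                    # select VP9 rather than VP8
        "--passes=2",                     # two-pass rate allocation
        "--end-usage=vbr",
        "--target-bitrate=" + str(kbps),  # in kilobits per second
        "-o", out_webm,
        src_y4m,
    ], check=True)

encode_vp9("source_1080p.y4m", "out.webm")
```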
Presenter bio: Dr. Han obtained his B.E. in Electrical Engineering from Tsinghua University in 2007, and M.S. and Ph.D. in Electrical and Computer Engineering from University of California Santa Barbara in 2012. He then joined the WebM codec team at Google Inc. His research interests include video compression, scalable coding, and stream networking. He received outstanding teaching assistant awards and dissertation fellowship from the Department of Electrical and Computer Engineering at UC Santa Barbara, and best student paper award of International Conference on Multimedia and Expo (ICME) 2012.
Jingning Han
10:00 Developing requirements for a visually lossless display stream coding system open standard
Dale Stolitzka (Samsung Electronics, USA)
VESA (Video Electronics Standards Association) is standardizing a visually lossless coding system to be used for compression of high-bandwidth display streams. This coding system will complement modern display link technologies to convey higher data rates to displays, save power, or both. The combination will meet the increased bandwidth needed by the high-resolution displays coming in the next few years. The standard is due for release in early 2014. This paper recounts VESA’s process that led to the compression requirements and the selection of a coding system baseline, reports progress toward an open standard, and develops strategies for evaluating a visually lossless coding system for consumer electronics and commercial displays.
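The bandwidth gap motivating the work reduces to simple arithmetic, sketched below; the 3:1 ratio is an illustrative assumption rather than a value from the VESA specification.

```python
# Raw panel bandwidth versus display-link capacity.

def raw_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

uhd_120 = raw_gbps(3840, 2160, 120)  # ~23.9 Gb/s uncompressed
print(uhd_120)
print(uhd_120 / 3)                   # ~8.0 Gb/s at an assumed 3:1 ratio
# Four lanes of DisplayPort HBR2 carry roughly 17.3 Gb/s of payload, so
# the uncompressed stream does not fit but the compressed one does.
```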
Presenter bio: Dale Stolitzka is the Principal Engineer at the Samsung Display America Lab in San Jose, CA, where he leads interface standardization. He chairs the VESA Display Stream Compression Task Group and is active in other industry standards-setting organizations, including the MIPI Alliance and ISO/IEC working groups. Mr. Stolitzka’s experience spans coding systems, consumer electronics displays and advanced transport mechanisms, such as VESA DisplayPort, MIPI Display Serial Interface and JPEG 2000 over MPEG-2 TS. Before joining Samsung, he developed computer and consumer electronics systems and mixed-signal system designs at Analog Devices, National Semiconductor, Maxtor and Raytheon. Mr. Stolitzka holds a B.S. degree in Applied Physics and an M.Eng. degree in Materials Science and Engineering, both from Cornell University.
Dale Stolitzka

Cloud-based Systems – Part 1

Three 30-minute Papers

Room: Salon 1

Chair: Don Craig (Arboretum Studios, USA)

The Cloud – an extensible virtual datacenter – offers the promise of scalability and robustness without all the physical datacenter headaches. Achieving that promise can be hard, with a number of widely publicized system outages in the last year, and the amount of work it takes to deploy in the cloud varies dramatically. These sessions examine current cloud offerings and attempt to assess the practicality of cloud-based media workflows using Infrastructure, Platform, or Software as a Service offerings.

09:00 Deploying video platforms in the cloud
Andrew Sinclair (News Corporation / Sinclair Media Technology, Australia)
The cloud is transforming IT infrastructure, and much as IT hardware transformed broadcast equipment, the cloud is set to have a major impact on the broadcast industry. There are now many mature cloud platforms in Australia, and combined with the relatively low cost of fibre-optic networking within metro areas, this presents a major opportunity to leverage the cloud for video workflows. This paper looks at the possibilities and challenges the cloud presents across production, editing, asset management, playout, and distribution.
Presenter bio: Andrew Sinclair developed an interest in video while studying computer science and working with very early digital genlock devices and rendering 3D animations. He first worked on live Internet audio broadcast in 1997, followed by live video broadcast in 1998, winning an Australian Internet Award for best sports site for these live audio and video features. He built Australia’s first premium online movie offering for BigPond Movies, introduced many large Australian companies to the concept of a Video Service Provider, and worked on multiple mobile TV platforms in the early stages of development. He was the architect for the infrastructure that supported the rollout of Telstra’s T-Box application, as well as for the multi-screen headend that supports Foxtel Mobile, T-Box, and Telstra’s connected TV services. He recently developed the architecture for News Corp to take to market its new premium video offering built on the back of Fox Sports content.
Andrew Sinclair
09:30 Private Patching in the Cloud: Offering the Media Industry a Mind of Its Own
Robert Jenkins (CloudSigma, USA)
With the media industry continuing to innovate, offering consumers more life-like images and mind-bending special effects, many production companies have turned to the public cloud to meet increasing compute demands. Despite the advantages, security fears have kept 50 percent of companies grounded in their on-premises environments, thereby missing out on the cloud’s benefits. Many organizations think that public cloud access requires data transit via the Internet and is therefore prone to data breaches. Are they justified? This presentation explains how private patching is changing the way media providers engage with the public cloud, allowing them to connect their dedicated infrastructure transparently into private networks within the public cloud, avoiding public IP usage altogether. Media companies can then reap the cloud’s scalable on-demand compute and storage benefits, treating the public cloud as a ‘remote brain’ while retaining data storage on their own equipment.
Presenter bio: Robert Jenkins is the co-founder and CEO of CloudSigma and is responsible for leading the technological innovation of the company’s pure-cloud IaaS offering. Under Robert’s direction, CloudSigma has established an unprecedented open, customer-centric approach to the public cloud.
Robert Jenkins
10:00 Building Scalable Media Workflows on the Cloud
Bhavik Vyas (Amazon Web Services & SMPTE Member, USA)
This session walks through approaches for cloud-based media ingest, storage, processing, and delivery scenarios on the AWS cloud. We cover solutions for high-speed file transfer, cloud-based transcoding, tiered storage, content processing, and global low-latency delivery, as well as the orchestration and management of the entire media workflow. Attendees can expect to come away with an understanding of best practices for architecting and deploying cloud-based media workflows using native services and third-party solutions.
Presenter bio: Bhavik has worked in the communications field for over 15 years, at leading technology companies including Amazon Web Services (AWS), HP/Agilent, Reliance Communications, and Aspera. He started his career in product management at HP in Scotland and has held a variety of sales engineering, business development, and product management roles. He spent four years at Aspera as Director of Cloud Services & Partnerships, working with companies such as Netflix, Amazon Unbox, Sony, WB, and Deluxe. He joined AWS in July 2012, where he is now responsible for the AWS global M&E ecosystem. Bhavik has a B.Eng. (EE) degree from Heriot-Watt University, Edinburgh, and an MBA from Golden Gate University in San Francisco.
Bhavik Vyas
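
As a rough sketch of the ingest-and-transcode step such a workflow involves, the following uses boto3 against S3 and Elastic Transcoder; the bucket name, pipeline ID, and preset ID are placeholders for resources you would configure in your own account, not values from the session.

    # Upload a mezzanine file to S3, then submit an Elastic Transcoder
    # job against a pre-built pipeline. All IDs below are placeholders.
    import boto3

    s3 = boto3.client("s3")
    ets = boto3.client("elastictranscoder")

    s3.upload_file("master.mov", "my-ingest-bucket", "masters/master.mov")

    job = ets.create_job(
        PipelineId="1111111111111-abcde1",       # placeholder pipeline ID
        Input={"Key": "masters/master.mov"},
        Output={
            "Key": "renditions/master_720p.mp4",
            "PresetId": "1351620000001-000010",  # example preset ID; verify in your account
        },
    )
    print("Submitted job:", job["Job"]["Id"])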

10:30 – 11:00

Break

Rooms: Salon 1, Salon 2

11:00 – 12:30

Advancements in Image Processing – Part 3

Three 30-minute Papers

Room: Salon 2

Chair: Sara Kudrle (Miranda Technologies & SMPTE Western Region Governor, USA)

Make sure to buckle your seat belts for this session! First we go into the future of Super Hi-Vision (8K TV) with the development of a novel real-time HEVC encoder. Then we return to the exciting present of dynamic video production (think camera on a skateboarder’s helmet) and the latest SMPTE mezzanine compression standard, VC-5. Finally, we’ll pause and reflect on how to assess video quality in the new world order of compression, and how best to tune the encoding based on video quality in the face of suboptimal transmission conditions (dropouts).

11:00 Development of the Super Hi-Vision HEVC/H.265 real-time encoder
Yasuko Sugito (NHK, Japan); Kazuhisa Iguchi (NHK, Japan); Atsuro Ichigaya (NHK, Japan); Kazuhiro Chida (NHK, Japan); Shinichi Sakaida (NHK, Japan); Yoshiaki Shishikui (NHK, Japan); Hiroharu Sakate (Mitsubishi Electric Corporation, Japan); Takayuki Itsui (Mitsubishi Electric Corporation, Japan); Nobuaki Motoyama (Mitsubishi Electric Corporation, Japan); Shun-ichi Sekiguchi (Mitsubishi Electric Corporation, Japan)
This paper introduces the world’s first Super Hi-Vision (SHV) real-time encoder incorporating the HEVC (High Efficiency Video Coding)/H.265 scheme. Test broadcasting using the SHV HEVC codec is scheduled for 2016. HEVC is the newest video coding standard and is capable of achieving approximately twice the compression of the existing AVC (Advanced Video Coding)/H.264 scheme. Its coding performance is expected to be particularly high for SHV video, due especially to its extended block partitioning. In this paper, we describe the fundamentals of HEVC and its suitability for high-resolution video such as SHV. We also introduce the specifications of the developed SHV encoder and its approach to achieving real-time processing. Finally, the results of an image quality evaluation are presented, which confirm that the newly developed encoder achieves higher image quality than a conventional SHV AVC encoder.
Presenter bio: Yasuko Sugito is currently with NHK (Japan Broadcasting Corporation) Science & Technology Research Laboratories, Tokyo, Japan, researching video compression algorithms for Super Hi-Vision (8K). Her current research interests focus on image-quality improvement and speed-up of high-efficiency video coding (HEVC), especially for high-resolution images.
Yasuko Sugito
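
One common route to real-time performance at 8K is spatial parallelism. The toy sketch below splits a frame into strips and compresses them concurrently; the zlib call is only a stand-in for a per-strip HEVC encode and is not NHK's implementation.

    # Split an 8K luma plane into horizontal strips and compress them in
    # parallel worker processes. zlib stands in for the real encoder.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np
    import zlib

    WIDTH, HEIGHT, STRIPS = 7680, 4320, 8

    def encode_strip(strip):
        return zlib.compress(strip.tobytes())

    if __name__ == "__main__":
        frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)   # luma plane only
        strips = np.array_split(frame, STRIPS, axis=0)
        with ProcessPoolExecutor(max_workers=STRIPS) as pool:
            bitstreams = list(pool.map(encode_strip, strips))
        print(sum(len(b) for b in bitstreams), "compressed bytes for one frame")
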
11:30 VC-5 Video Compression for Mezzanine Compression Workflows
Ed Reuss (Unaffiliated, USA)
The new SMPTE video compression standard ST 2073, “VC-5 Mezzanine Level Compression,” is a variable-bit-rate codec originally developed for video acquisition and post-production and applicable to a diverse range of image and video formats. The term “mezzanine level compression” refers to a lightly compressed video format, usually between one half and one tenth of the uncompressed data rate. Mezzanine compression is used for workflows in which the image sequence must be decompressed and recompressed multiple times while still minimizing the accumulated compression artifacts. Several video compression standards are used for mezzanine-compressed workflows, and each offers a specific set of trade-offs (compression ratio, fidelity, speed, power consumption, image formats, etc.) that makes it suitable for specific situations. This paper introduces the new VC-5 standard and describes its advantages for certain acquisition and post-production workflows.
Presenter bio: Edward Reuss is an independent consultant specializing in video, audio and Wi-Fi networks, particularly for very low latency applications. Earning his MSEE at Colorado State University, Ed started in test and measurement, for Hewlett Packard (Agilent), Tektronix and Wavetek. He worked at General Instrument on the Eurocypher project for British Satellite Broadcasting (BSB). After several years developing scientific instruments at Scripps Institution of Oceanography, he was a Director of Systems Engineering at Tiernan Communications, developing real-time MPEG-2 video encoders for DSNG and network distribution. He switched to consumer products as a Principal Engineer in Plantronics’ Advanced Technology Group, where he developed several advanced technology prototype headsets incorporating DSP, Bluetooth and Wi-Fi. Since then, he has consulted for several clients, including GoPro, Clair Global and TiVo. Ed is active in the SMPTE Standards Community, a senior member of the IEEE and voting member of the IEEE 802.11 Working Group.
Ed Reuss
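
The key property a mezzanine codec is judged on, generational robustness, can be explored with a simple harness like the one below, which recompresses an image repeatedly and tracks PSNR against the original; Pillow's JPEG codec is only a stand-in, since VC-5 itself has no Python binding here.

    # Measure quality loss across repeated decompress/recompress cycles.
    import io
    import numpy as np
    from PIL import Image

    def psnr(a, b):
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    original = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    current = original.copy()

    for generation in range(1, 6):
        buf = io.BytesIO()
        Image.fromarray(current).save(buf, format="JPEG", quality=90)
        current = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
        print(f"generation {generation}: PSNR {psnr(original, current):.2f} dB")
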
12:00 Analyzing and Optimizing Video Quality in the new H.265 (HEVC) Standard
Advait M Mogre (Interra Systems, USA); Bhupender Kumar (Interra Systems, USA); Shekhar Madnani (Interra Systems, India); Muneesh Sharma (Interra Systems, India); Shailesh Kumar (Interra Systems, India)
The emerging H.265 (HEVC) standard for video compression is geared toward providing high-quality video at low to moderate bit rates, providing a significant increase in compression efficiency over existing standards such as H.262/H.263/H.264. With increased compression comes decreased source-coding redundancy. From a video quality perspective, therefore, the nature of potential coding artifacts is of interest, as is the profile of video dropouts due to uncorrectable coding errors. The increased complexity of HEVC gives the user additional control parameters that affect picture quality, such as the tile-based video data structure and the SAO in-loop filter. By monitoring video quality for a variety of content (sports, news/talk shows, etc.) under varying SNR profiles, these parameters can be tuned for each application, keeping in mind the nature of the video failures or dropouts that could occur, so as to optimize video quality in a given situation.
Presenter bio: Advait Mogre is currently a Principal Scientist at Interra Systems since November 2012. Prior to this assignment, he held a similar position at Broadcom Corporation with an emphasis on algorithmic and architectural issues pertaining to picture quality within a Set Top Box receiver. He began his professional career with an involvement in Systems Analysis and modeling of JPEG and H.261 standards at LSI Logic. He has over 10 years of video related experience in the consumer product industry. He holds a Bachelor and Master of Technology degrees from the Indian Institute of Technology-Bombay, and a Doctorate from the University Of Missouri-Columbia.
Advait M Mogre
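
A toy no-reference monitor in the spirit of the paper might scan decoded frames for black frames and freezes, two common symptoms of dropouts. The frames below are synthetic stand-ins for decoder output, and the thresholds are illustrative only.

    # Flag black frames (low mean luma) and freezes (tiny inter-frame
    # difference) in a sequence of decoded frames.
    import numpy as np

    BLACK_MEAN = 16.0     # mean luma at or below this looks black
    FREEZE_DIFF = 0.5     # mean abs difference this small looks frozen

    frames = [np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
              for _ in range(3)]
    frames.insert(2, np.zeros((1080, 1920), dtype=np.uint8))  # inject a black frame
    frames.insert(4, frames[3].copy())                        # inject a freeze

    prev = None
    for i, f in enumerate(frames):
        if f.mean() <= BLACK_MEAN:
            print(f"frame {i}: black frame")
        if prev is not None and np.abs(f.astype(int) - prev.astype(int)).mean() <= FREEZE_DIFF:
            print(f"frame {i}: possible freeze")
        prev = f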

Cloud-based Systems – Part 2

Three 30-minute Papers

Room: Salon 1

Chair: Don Craig (Arboretum Studios, USA)

The Cloud – an extensible virtual datacenter – offers the promise of scalability and robustness without all the physical datacenter headaches. Achieving that promise can be hard, with a number of widely publicized system outages in the last year, and the amount of work it takes to deploy in the cloud varies dramatically. These sessions examine current cloud offerings, and attempt to assess the practicality of cloud-based media workflows using Infrastructure, Platform, or Software as a Service offerings.

11:00 Building Real World Media in the Cloud
Andy Hurt (Front Porch Digital, USA); Brian Campanotti (Front Porch Digital, USA)
Attendees of “Building Real World Media in the Cloud” will gain a unique insight into the assembly of an integrated workflow that enables high resolution broadcast content to flow seamlessly through public, private, and hybrid cloud infrastructures. This session tackles head-on the issues of security, bandwidth, storage technology, and cost and will provide attendees with real-life examples of approaches where the Cloud is being successfully used for content protection, disaster recovery, and business continuance.
Presenter bio: Andy Hurt is the vice president of product management for Front Porch Digital. He has more than 12 years of experience leading product development, management, strategy, and operations in multiple global technology organizations. He served as senior director of product development and delivery at Level (3) Communications, as senior director of product solutions at First Data, and as general manager of product operations and finance at DISH Network. Hurt has an MBA in international management from the Fisher Graduate School of International Business at the Monterey Institute of International Studies and a bachelor’s degree in Spanish from the University of Kansas. He is certified as a New Product Development Professional by the Product Development and Management Association.
Andy Hurt
11:30 Look to the Cloud: Enabling Seamless Video in a Multi-Device World
Jeff Malkin (Encoding.com, USA)
Video content publishers need to maximize revenue opportunities by ensuring successful video playback across all consumer devices. As more and more end users consume content on tablets and smartphones, supporting a mobile video workflow is now imperative. At the same time, the complexity of successfully delivering multi-platform mobile video is increasing. Preparing content for web and mobile consumption requires massive computing resources equipped for high-speed processing, plus the tools and know-how to ingest and transcode content into varying formats for rapid, error-free, and optimized delivery. This session will address how to prepare and host video, and demonstrate how companies can leverage cloud computing to deploy scalable and flexible video workflows that maximize video consumption across all platforms. Attendees will learn about different cost structures, mobile video monetization, editing and customization options, universal closed captioning support, and how to achieve workflows that include user-generated video sites, content management systems, and mobile applications.
Presenter bio: Jeff Malkin, president of Encoding.com, has a proven track record in growing Internet and mobile startups, guiding Encoding.com, a Streaming Media 2012 “Top 100 Companies That Matter in Online Video” honoree, to its position as the world’s largest cloud-based media processing service provider, with thousands of clients including many leading media and entertainment brands. Jeff was recently named a Streaming Media All-Star. Prior to Encoding.com, Jeff founded Razz Inc. (venture funded – $12M), a provider of audio and telephony-based applications whose services have been distributed by leading wireless carriers worldwide. Prior to Razz, Jeff was CEO of FreeSamples.com, a research and analytics service provider serving top brands in the consumer packaged goods industry (venture funded – $17M). Jeff holds a Bachelor of Arts in Music from the University of Michigan.
Jeff Malkin
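
A minimal sketch of preparing one source for multi-device delivery is an adaptive-bit-rate ladder of H.264 renditions; the rung sizes and bit rates below are illustrative, and ffmpeg is assumed to be on the PATH.

    # Transcode one mezzanine file into an ABR ladder of MP4 renditions.
    import subprocess

    SOURCE = "program_master.mov"    # hypothetical input file
    LADDER = [("1920x1080", "5000k"), ("1280x720", "2500k"), ("640x360", "800k")]

    for size, bitrate in LADDER:
        out = f"program_{size.split('x')[1]}p.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", SOURCE,
             "-c:v", "libx264", "-b:v", bitrate, "-s", size,
             "-c:a", "aac", "-b:a", "128k",
             out],
            check=True)
        print("wrote", out)
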
12:00 Virtualization of Television Playout
John Shike (Snell Inc, USA); Martin Holmes (Snell, USA)
Moving from videotape to file-based workflows to cloud-based resources for broadcast production and playout will enable efficiencies in time, cost, and manpower. Through the use of automation, IT-based hardware, client/server applications, and virtualization, it is possible to move various production capabilities, master control, and playout into a completely remote private data center or cloud-based facility. The advantages, challenges, and evolutionary approach to this disruptive technology are discussed.
Presenter bio: Martin Holmes is the Vice President of Technology at Snell. He is involved in the design, integration, and implementation of customers’ digital facilities arising from the transition to file-based and IP operations. In this role, he has been involved in the successful build and launch of the world’s largest digital broadcast facility, as well as many other pioneering projects. He brings a multi-disciplinary engineering approach, with strong technical expertise in systems design, project management, integration, and control for large-scale systems, working closely with system integrators and other vendors to deliver connectivity and full integration of a Snell system. His areas of focus are the automation of playout and master-control environments, optimized broadcast workflow, and the integration of mixed SDI- and IP-based operations.
Martin Holmes

12:30 – 14:00

Boxed Lunch in Exhibit Hall

Room: Exhibit Hall

14:00 – 15:30

More, Better, Faster Pixels – Part 1

Three 30-minute Papers

Room: Salon 1

Chair: Peter H Putman (Kramer Electronics USA, USA)

Time waits for no one, and neither does imaging technology! 4K is here, and 8K is lurking in the shadows. The UHDTV acquisition-production-delivery pipeline is still under construction and there are many technical challenges to resolve along the way, such as standardization of time code and sync, real-time image processing at higher clock and frame rates, control of UHDTV hardware using IP interfaces, and signal transport of UHDTV over existing serial interfaces. Attendees will also hear about an innovative compact 120Hz 4K camera system, as well as a unique application of immersive UHDTV in an entertainment venue.

14:00 Beyond the Interface: Other work in SMPTE related to UHDTV
J. Patrick Waddell (Harmonic Inc., USA)
This paper will outline other developments within SMPTE, including the Study Group on UHDTV Ecosystems, the development of new Sync and Timecode standards, the development of standardized device control over IP, and others. UHDTV is going to be a disruptive technology, and users are likely to wish to upgrade as much of their workflow as possible at once. SMPTE is working to provide new standards and recommended practices which will help future-proof these upgraded facilities.
Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently the Chair of the ATSC’s TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC’s Bernard J. Lechner Outstanding Contributor Award and has shared in four Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
J. Patrick Waddell
14:30 Creating innovative broadcasting with technology and standardization
Tadaaki Yokoo (Association of Radio Industries and Businesses, Japan)
Radio systems in broadcasting and telecommunications, as seen in digital broadcasting and smartphones, have advanced rapidly. Today, they are essential to our economic activities and social life. The Association of Radio Industries and Businesses (ARIB) devotes itself, as a standards development organization, to fostering the ICT (information and communications technology) society through contributions to radio systems. The paper describes ARIB’s role and activities in enriching and enlarging broadcasting services and industries, introducing its study, R&D, and standardization work on quality evaluation methods, loudness operation, UHDTV systems, etc. An issue concerning standards and essential industrial property rights is also touched upon.
Presenter bio: Tadaaki Yokoo has been the executive director of Association of Radio Industries and Businesses (ARIB) since 2007. Before joining ARIB, he worked for NHK (Japan Broadcasting Corporation) for 33 years, where he has been involved in various engineering strategy and development work, including HDTV and digital TV. He was stationed in London, U.K., from 1993 to 1996 as the executive vice president of NHK Enterprises Europe Limited.
Tadaaki Yokoo
15:00 A design approach to creating scalable Beyond-4K video processing system on FPGAs
Benjamin Cope (Altera Corporation, United Kingdom)
An unquenchable end-user thirst for enhanced video quality results in ever-increasing video frame-size and frame-rate requirements. As we move from 4K to 8K and from 120 fps to 300 fps, the computational complexity of the video processing systems required to consume, process, and deliver video content inevitably increases. The need for solutions that support combinations of these frame sizes and rates, as well as future increments, emphasizes the need for system scalability. The computational-complexity and scalability requirements pose exciting challenges for FPGA implementation of video processing pipelines. This paper compares implementation techniques and methodologies for overcoming these challenges. We concentrate specifically on architectures in which the input video sample rate, in pixels per second, exceeds the system clock rate. The novel result is to classify techniques and quantify results for future-proofing video processing solutions against ever-growing computational complexity requirements.
Presenter bio: Ben is the manager of the Video IP group at Altera with responsibility for connectivity and video processing IP cores. During five years at Altera he has co-authored patents and conference papers. Prior to joining Altera, Ben completed a PhD in video processing acceleration on reconfigurable logic and graphics processors at Imperial College London. He is a professional member of SMPTE and a member of the IEEE.
Benjamin Cope
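
The constraint at the heart of the paper, a pixel rate above the system clock rate, forces a datapath that accepts several pixels per clock. The software model below imitates a four-pixel-wide gain stage; the clock figure in the comment is an assumed example, not Altera's.

    # Model a datapath that must process PIXELS_PER_CLOCK samples in each
    # "cycle" because the pixel rate exceeds the fabric clock rate
    # (e.g., 4K60 at ~500 Mpixel/s against a ~150 MHz clock needs 4).
    import numpy as np

    PIXELS_PER_CLOCK = 4

    line = (np.arange(7680) % 1024).astype(np.uint16)   # one 10-bit 8K luma line
    out = np.empty_like(line)

    for cycle in range(0, line.size, PIXELS_PER_CLOCK):
        chunk = line[cycle:cycle + PIXELS_PER_CLOCK]
        # Wide gain stage: all four pixels are handled in one cycle.
        out[cycle:cycle + PIXELS_PER_CLOCK] = np.clip(chunk * 2, 0, 1023)

    print(line.size // PIXELS_PER_CLOCK, "clock cycles for one line")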

Advancements in Image Processing – Part 4

Three 30-minute Papers

Room: Salon 2

Chair: Jim DeFilippis (TMS Consulting, USA)

We start with HEVC unlocking our future to an all-progressive image processing world with a 50% improvement in bit rate over H.264/AVC, then move on to how GPUs working with CPUs will make short work of all sorts of image processing, while the final paper of the session will ‘blow your mind’ with a radical new type of compression using contours instead of pixels. Quite a finish to our quartet of sessions on Advancements in Image Processing!

14:00 HEVC, the key to deliver enhanced viewing experience beyond HD
Sophie Percheron (ATEME, France); Jerome Vieron (ATEME, France)
Broadcasters’ and operators’ objectives remain the same over time: reduce transport costs, reach more customers, and improve the TV viewing experience. Ten years after the beginning of the SD-to-HD transition, history is about to repeat itself with the 50% bit-rate saving promised by HEVC (High Efficiency Video Coding). Such a compression gain will make it possible to improve the viewing experience, bringing more information and emotion through higher spatial and temporal resolution. 1080p60 has already demonstrated a user-experience improvement over 1080i30, inspiring market precursors to undertake initiatives in producing 1080p60 channels; however, the required bandwidth remains the bottleneck to end-to-end deployment. This paper will demonstrate why HEVC is the key to unlocking the deployment of a “progressive only” broadcast chain, and why it will not immediately spell the end of life for interlacing. Finally, we will discuss the future (premium) services brought by a “beyond HD” viewing experience: live events, sports, and concerts, but also the new immersive experiences offered by UHDTV.
Presenter bio: Sophie Percheron has been Product Marketing Manager for Live Distribution at ATEME since October 2012. Sophie is responsible for building comprehensive solutions to deliver linear channels over cable, IPTV, satellite, and terrestrial as well as over-the-top networks, enhancing the TV experience with added-value services. In her role, Sophie drives the growth of the Live Distribution solution and the definition of the related product roadmap to address today’s and tomorrow’s challenges in the broadcast and broadband live-distribution markets. Before joining ATEME, Sophie worked at Bouygues Telecom and Microsoft, where she held Product Marketing Manager and Senior Program Manager positions. Sophie holds a Master’s degree in Marketing from Paris Dauphine.
Presenter bio: Jérôme Viéron received his Ph.D. in signal processing and telecommunications from the University of Rennes 1 in 1999. He joined Thomson R&D France as a Research Engineer working on advanced video coding, and he is an active contributor to standardization efforts led by the ISO and ITU-T groups, having been particularly active in the H.264/MPEG-4 SVC standardization process. In 2007, he joined the Video Processing and Perception Lab of Technicolor R&I as Senior Scientist, exploring new technologies for future video coding applications and standards. He joined ATEME in 2011 as Advanced Research Manager, where he is in charge of French and European research programs and works on new-generation video coding technologies. He is an active contributor to the standardization of HEVC (High Efficiency Video Coding) and is involved in the 4EVER (for Enhanced Video ExpeRience) consortium, which aims at researching, developing, and promoting an enhanced television experience.
Sophie Percheron, Jerome Vieron
14:30 Enhanced Image Processing Beyond Baseband: CPU/GPU Processing Model Unlocks Performance Possibilities
Kirk Marple (RadiantGrid Technologies, LLC, USA); Ernie Sanchez (Cinnafilm, USA)
Leveraging commodity enterprise computing technology, emerging file-based image enhancement solutions focused on frame-rate conversion, de-noising, resolution scaling, and other quality-focused image enhancement techniques are exceeding the speed and quality standards traditionally set by baseband approaches. This paper will examine how the combination of continually increasing CPU power with increasingly sophisticated GPU-based algorithms enables commodity-based models to provide ever-greater performance and output quality, enables simple and fast realization of new processing capabilities, and provides the significant operational and cost benefits found only in highly automated, massively parallelized grid-based processing workflows.
Presenter bio: Kirk Marple is chief software architect of RadiantGrid, a Wohler brand. Marple is responsible for the RadiantGrid platform vision, strategic business development, and management of product development. A 25-year veteran of the software industry, he founded Radiant Grid Technologies and served as its president and chief software architect before its acquisition by Wohler in 2012. Previously, Marple co-founded Agnostic Media and served as its chief software architect for over seven years. He also spent six years at Microsoft, where he was chief architect and development manager for the Microsoft Virtual Worlds research platform, an object-oriented software platform for developing 3D virtual environments. Also at Microsoft, Marple was development manager for WindowsMedia.com and development lead for the Blackbird project multimedia publishing system. He holds five U.S. patents for his work in object-oriented virtual environment platforms. He holds a master’s degree in computer science from the University of British Columbia.
Presenter bio: In the role of COO, Ernie is responsible for sales, marketing, and support of Cinnafilm software, in addition to all internal operational groups within Cinnafilm. With a long-standing career in technology, Ernie brings over 7 years of technology project management and 11 years of enterprise IT infrastructure experience to the file-based media and entertainment market vertical Cinnafilm serves. Beginning his technology career with US West, Ernie quickly garnered national notoriety for expertise in transmission issues affecting call quality and was the youngest person in US West history to be given TEC-55 status, a significant achievement for a young engineer. Ernie also served on IEEE subcommittees for lightning and high-voltage protection of telecommunications for mobile and land-based networks. Ernie has nearly three decades of experience with studio and stage audio technologies and has worked as a composer, arranger, producer, and musician across many music genres, scoring multiple independent films. Ernie graduated from New Mexico State University with a degree in Electrical Engineering and is a member of the Institute of Electrical and Electronics Engineers.
Kirk Marple, Ernie Sanchez
15:00 Taking the pixel out of the picture
Phil Willis (University of Bath & Centre for Digital Entertainment, United Kingdom)
We have developed a contour-based image and movie representation which takes the pixel out of the picture. We use contours, which are scale-free and can readily be rendered back to an image at a new resolution independent of the original. They can act as a universal intermediate during post-production. They provide a single delivery mechanism, whether to a mobile phone, a TV screen, or a cinema. They are future-proof, taking HD resolution and beyond in their stride. They scale gently with input resolution, avoiding square-law storage and bandwidth requirements. This is not a disruptive technology: moving between pixels and contours can happen at any stage in the pipeline, and it needs no special cameras or displays (see http://www.cs.bath.ac.uk/vsv/). We will give an update on our current state of play, with video examples, explain how we retain high quality, and show the results of recent demanding professional-quality tests.
Presenter bio: Phil Willis’s research interests are in colour raster graphics, computer games, virtual reality and animation and film technologies. He has an underlying interest in picture and object representations, especially the balance between discrete and continuous representations. He is Professor of Computing, Director of the national EPSRC Doctoral Training Centre for Digital Entertainment and founded the Media Technology Research Centre at the University of Bath. He led Bath’s Department of Mathematical Sciences from 1997-2000 and the Department of Computer Science from 2007-10 and was reappointed for 2010-2013. He is a Fellow and past Chair of the Eurographics Association and became the first Eurographics-ACM SIGGRAPH joint member. He was a founding member of the UK Computing Research Committee.
Phil Willis
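
A toy illustration of the resolution-independence argument: extract sub-pixel contours with scikit-image and rescale their coordinates to any target resolution, with no pixel grid carried along. This sketches the general idea only, not the Bath representation itself.

    # Extract iso-level contours from an image and rescale them; the
    # contour polylines carry no pixel grid, so any output size works.
    import numpy as np
    from skimage import measure

    img = np.zeros((120, 160))
    img[40:80, 50:110] = 1.0                         # a bright rectangle

    contours = measure.find_contours(img, level=0.5)  # sub-pixel polylines

    SCALE = 18.0                                      # e.g., 160x120 -> ~2880x2160
    scaled = [c * SCALE for c in contours]
    print(len(scaled), "contours; first has", len(scaled[0]), "points")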

15:30 – 16:00

Break

Rooms: Salon 1, Salon 2

16:00 – 17:30

More, Better, Faster Pixels – Part 2

Three 30-minute Papers

Room: Salon 1

Chair: Peter H Putman (Kramer Electronics USA, USA)

Time waits for no one, and neither does imaging technology! 4K is here, and 8K is lurking in the shadows. The UHDTV acquisition-production-delivery pipeline is still under construction and there are many technical challenges to resolve along the way, such as standardization of time code and sync, real-time image processing at higher clock and frame rates, control of UHDTV hardware using IP interfaces, and signal transport of UHDTV over existing serial interfaces. Attendees will also hear about an innovative compact 120Hz 4K camera system, as well as a unique application of immersive UHDTV in an entertainment venue.

16:00 Development of a multi-link 10-Gbit/s mapping method and interface device for 120-fps UHDTV signals
Takuji Soeno (NHK, Japan); Yukihiro Nishida (Japan Broadcasting Corporation, Japan); Takayuki Yamashita (Japan Broadcasting Corporation & Science and Technology Research Laboratories, Japan); Yuichi Kusakabe (NHK, Japan); Ryohei Funatsu (NHK (Japan Broadcasting Corporation), Japan); Tomohiro Nakamura (NHK, Japan)
We have devised a new mapping method to carry various ultra-high-definition television (UHDTV) signals, including a 120-frame-per-second (fps) signal, in multi-link 10-Gbit/s streams, and developed an interface prototype for connecting UHDTV video devices. The video parameter values of UHDTV systems, including 120-fps signals, are specified in Recommendation ITU-R BT.2020. The UHDTV interface has already been standardized as SMPTE ST 2036, but it does not yet support 120-fps signals because it was originally built on the basis of the high-definition serial digital interface, which can handle frame frequencies of up to 60 Hz. To make the interface device compact and low in power consumption, we implemented the prototype using a parallel-fiber-optics transceiver with a capacity of 10 Gbit/s per channel, and we verified the practicality and feasibility of the multi-link 10-Gbit/s mapping method and the interface prototype.
Presenter bio: Takuji Soeno is currently an engineer at NHK (Japan Broadcasting Corporation) Science and Technology Research Laboratories. He received his B.E. degree in 2002 and M.E. degree in 2004 from Keio University. In 2004, he joined NHK and built his career as a video engineer through TV program production. Since 2010, he has been in charge of the research and development of ultra-high-definition television systems, particularly cameras, image sensors, and digital signal processing. He is a member of the Institute of Image Information and Television Engineers of Japan (ITE).
Takuji Soeno
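
The arithmetic driving the multi-link design, and a round-robin spread of data blocks over lanes, can be sketched as follows; the bits-per-pixel figure and block granularity are simplifying assumptions, not the mapping defined in the paper.

    # How many 10-Gbit/s lanes does an 8K/120 signal need, and how might
    # data blocks be distributed across them? Overheads are ignored.
    BITS_PER_PIXEL = 24                   # 12-bit 4:2:2 -> 24 bits/pixel average
    PIXEL_RATE = 7680 * 4320 * 120        # 8K at 120 fps
    LINK_CAPACITY = 10_000_000_000        # one 10-Gbit/s lane

    payload = BITS_PER_PIXEL * PIXEL_RATE
    lanes = -(-payload // LINK_CAPACITY)  # ceiling division
    print(f"{payload/1e9:.1f} Gbit/s payload -> at least {lanes} x 10-Gbit/s lanes")

    blocks = [f"block{i}" for i in range(12)]
    mapping = {lane: blocks[lane::lanes] for lane in range(lanes)}
    for lane, assigned in mapping.items():
        print("lane", lane, "->", assigned)
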
16:30 Compact 120-fps Super Hi-Vision (8K) camera with 35-mm PL mount lens
Hiroshi Shimamoto (NHK & Japan Broadcasting Corporation, Japan); Toshio Yasue (NHK, Japan); Kazuya Kitamura (NHK, Japan); Toshihisa Watabe (NHK, Japan); Norifumi Egami (Kinki University, Japan); Shoji Kawahito (Research Institute of Electronics, Japan); Tomohiko Kosugi (Brookman Technology, Inc., Japan); Takashi Watanabe (Brookman Technology, Inc., Japan); Taku Tsukamoto (ASTRODESIGN, Inc., Japan)
NHK has been researching and developing Super Hi-Vision, the 8K version of UHDTV, for future broadcasting. Last year we developed a CMOS image sensor with 120 fps, 33 megapixels, and 12-bit ADC, in conformance with the highest resolution and frame frequency specified in Recommendation ITU-R BT.2020, along with a 120-fps three-chip color camera equipped with this image sensor. We have recently developed a 120-fps 33-megapixel single-chip color CMOS image sensor and a compact 8K camera head equipped with it. The camera head measures 125 mm (W) x 125 mm (H) x 150 mm (D) and weighs 2 kg, making it dramatically smaller and lighter than former 8K cameras. The optical size of the image sensor is 25 mm, and the camera head is compatible with the Super 35 mm PL-mount lenses used on various digital cinema cameras.
Presenter bio: Hiroshi Shimamoto received his B.E. degree in electronic engineering from Chiba University and his M.E. and Ph.D. degrees in information processing from the Tokyo Institute of Technology in 1989, 1991, and 2008, respectively. In 1991, he joined NHK (Japan Broadcasting Corporation). Since 1993, he has been working on research and development of UHDTV (ultra-high-definition TV) cameras and 120-fps 8K image sensors at the NHK Science & Technology Research Laboratories. In 2005-2006, he was a visiting scholar at Stanford University. He is a member of the IEEE.
Hiroshi Shimamoto
17:00 Creative Application of Digital Media and Technology in the Themed Entertainment Industry
Rick Rothschild (FAR Out! Creative & Themed Entertainment Association, USA)
This presentation explores how emerging digital media technology is influencing the themed entertainment industry, using FlyOver Canada, a newly opened (June 2013) attraction in Vancouver, BC, Canada, as its focus. The attraction is a “flying ride” 180º dome experience employing 4K 60-fps digital capture and playback with a specially designed and manufactured spherical projection lens, as well as a unique 14:1 audio system. The inspiration for this new attraction was Disney’s Soarin’ Over California, which only 12 years ago was at the edge of display technology, employing 48-fps IMAX film for both capture and playback. Having been the creative director for both attractions, I will provide unique insight into how times (and media formats and technology) are quickly changing and influencing the global themed entertainment world.
Presenter bio: Blending a unique set of skills developed over more than 40 years in the world of theater, Disney theme parks, media, and museums, Rick brings deep technical knowledge together with a strong creative perspective that enables him to integrate complex ideas, systems, and story, and to deliver them effectively and efficiently as a complete entertainment product. Rick served as Creative Director for FlyOver Canada, a new “flying ride” attraction that opened in Vancouver, BC, in June 2013. Highlights of his Disney career include serving as creative director for Soarin’ Over California, Finding Nemo Submarine Ride, Captain EO, and the American Adventure, as well as providing technical show integration and direction for the new Star Tours attraction. Rick is the immediate past President of the Themed Entertainment Association, still serving on its International Board. In the last 20 years, six projects in which he was involved have been recognized with the THEA Award for Outstanding Achievement.
Rick Rothschild

Cinematography

Three 30-minute Papers

Room: Salon 2

Chair: Paul Chapman (Foto-Kem Industries Inc., USA)
16:00 Lightfield Acquisition and Processing System for Film Productions
Siegfried Foessel (Fraunhofer IIS, Germany); Frederik Zilly (Fraunhofer IIS, Germany); Michael Schöberl (Fraunhofer IIS, Germany); Peter Schäfer (Fraunhofer IIS, Germany); Matthias Ziegler (Fraunhofer IIS, Germany); Joachim Keinert (Fraunhofer IIS, Germany)
With traditional film cameras, important creative parameters such as depth of field and viewpoint are burned into the footage at acquisition, with no possibility of changing them in post-production. In consequence, special attention is required on set to frame the scene and pull focus. Against this background, we propose a lightfield capturing and processing system, suitable for use on set, that allows the camera viewpoint and focal plane to be changed in post-production. Our approach involves a synchronized array of compact high-definition cameras capturing multiple viewpoints, enabling high image quality in the resulting footage. Based on computational imaging methods, virtual camera positions and synthetic apertures can then be created afterwards. First scenes captured with such an array will be demonstrated.
Presenter bio: Siegfried Foessel received his Diploma degree in Electronic Engineering in 1989. He started his professional career as a scientist at the Fraunhofer Institute for Integrated Circuits IIS in Erlangen and was project manager for projects in process automation, image processing systems, and digital camera design. In 2000 he received his Ph.D. for work on image distribution technologies for multiprocessor systems. Since 2001 he has focused on projects for digital cinema and media technologies, with responsibility for projects such as the ARRI D21, the DCI certification test plan, and JPEG 2000 standardisation for digital cinema. Siegfried is a member of various standardisation bodies and organisations, including SMPTE; in ISO SC29/JPEG he chairs the systems group, and within the EDCF he is a member of the technical board. Since 2010 Siegfried has been head of the department Moving Picture Technologies, spokesman of the Fraunhofer alliance Digital Cinema, and vice president of the FKTG, the German equivalent of SMPTE.
Siegfried Foessel
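
The synthetic-aperture idea can be sketched with the classic shift-and-average approach: each view is shifted in proportion to its baseline and the results are averaged, so objects at the chosen disparity come into focus while the rest blurs. The array geometry and disparity below are illustrative, and this is not the Fraunhofer pipeline.

    # Synthetic refocus over a 3x3 camera array by shift-and-average.
    import numpy as np

    def refocus(views, positions, disparity):
        """views: list of HxW arrays; positions: (dx, dy) per camera in
        grid units; disparity: pixels of shift per grid unit."""
        acc = np.zeros_like(views[0], dtype=np.float64)
        for img, (dx, dy) in zip(views, positions):
            shifted = np.roll(img, (int(dy * disparity), int(dx * disparity)),
                              axis=(0, 1))
            acc += shifted
        return acc / len(views)

    views = [np.random.rand(240, 320) for _ in range(9)]       # stand-in captures
    positions = [(x, y) for y in (-1, 0, 1) for x in (-1, 0, 1)]
    focused = refocus(views, positions, disparity=4.0)
    print(focused.shape, focused.dtype)
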
16:30 Lens Considerations for Digital Cinematography
Laurence J Thorpe (Canon USA Inc, USA)
Directors of photography speak of the personalities of cine lenses, and wide discussion surrounds their associated “looks”. The cine lens is bound up in complex technologies, and the multi-dimensional aspects of the optical images projected onto camera image sensors are inextricably tied to the many variables inherent in a given design. This paper will review the primary considerations in the relatively new Canon Cinema EOS lens designs for contemporary digital cine cameras. It will further attempt to describe the contributions of their separate optical performance parameters to the personality of these lenses, in the context of the look of the imagery they produce.
Presenter bio: L. Thorpe is Senior Fellow, Professional Engineering & Solutions within the Imaging Technologies & Communications Group of Canon USA Inc. Mr. Thorpe was with the Sony Broadcast Company from 1983 to 2003. Mr. Thorpe worked for RCA’s Broadcast Division from 1966 to 1982, where he developed a range of color television cameras and telecine products. From 1961 to 1966, Mr. Thorpe worked in the Designs Dept. of the BBC in London, England, where he participated in the development of a range of color television studio products. Mr. Thorpe is an IEE Graduate (1961) of the College of Technology in Dublin, Ireland and received his Chartered Engineer (C. Eng.) and MIEE distinction in 1961 from the Institute of Electrical Engineers in London, England.
Laurence J Thorpe
17:00 High-Accuracy Digital Camera Color Transforms for Wide Gamut Workflows
Jon McElvain (Dolby Laboratories, USA); Walter Gish (Dolby Laboratories, USA)
For digital camera systems, transforming from the native camera RGB signals into an intermediate working space is often required, with common examples involving transformations into ACES or XYZ. For scene-linear camera signals, by far the most common approach utilizes 3×3 matrices (formed using regression methods), which are low-complexity approximations to the exact transformation that would be obtained using a full spectral analysis. For workflows designed for Rec709 displays, matrix-based input transforms are capable of producing reasonable accuracy in this domain. However, the 3×3 matrix colorimetric errors can become significant for saturated colors in workflows involving wide-gamut primary systems such as UHDTV or ACES. To address this shortfall, a novel input color transformation method has been developed that involves separate one-dimensional and two-dimensional operations. From the native camera RGB signals, chromaticity-like coordinates are computed and these are used to index into a two-dimensional lookup table (LUT); the output of the two-dimensional LUT is then scaled according to the input signal. Because the surfaces associated with the 2D LUTs possess many degrees of freedom, highly accurate colorimetric transformations can be achieved. For several cinematic and broadcast cameras tested, this new transformation method consistently shows a modest reduction of mean deltaE errors for colors within the Rec709 primaries. The improvement in accuracy becomes much more significant for saturated colors, for which the mean deltaE errors are reduced by more than a factor of three for colors that lie between Rec709 and Rec2020.
Presenter bio: Jon McElvain is currently with the Advanced Technology Group of Dolby Laboratories, and is based in Burbank, CA. He specializes in camera and display systems, image quality quantification, and image processing algorithm development. His previous positions were with Digital Imaging Systems GmbH, Micron Inc. (imaging division, now Aptina LLC) and with Xerox Corporation. He has over 20 publications and holds 25 patents, with another 11 patents pending. He is a member of SMPTE and IS&T, and previously served as chairman of the IEEE Standing Committee on Industry Signal Processing Technology (IDSP). He received a B.S. in physics from the University of California, San Diego, an M.A. in physics from the University of California, Berkeley, and a Ph.D. in physics from the University of California, Santa Barbara.
Jon McElvain
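
The 3×3 baseline the paper improves upon is typically fit by least-squares regression from chart patches; a minimal sketch follows, with random stand-in data in place of measured camera RGB and reference XYZ values. The paper's 2D-LUT method then replaces this single matrix with a chromaticity-indexed lookup.

    # Fit a 3x3 camera matrix M minimizing |RGB.M - XYZ| over training
    # patches, then apply it to new camera signals. Data is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    camera_rgb = rng.random((24, 3))          # e.g., a 24-patch chart, camera side
    true_matrix = np.array([[0.6, 0.3, 0.1],
                            [0.2, 0.7, 0.1],
                            [0.0, 0.1, 0.9]])
    reference_xyz = camera_rgb @ true_matrix + 0.01 * rng.standard_normal((24, 3))

    M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
    print("fitted matrix:\n", np.round(M, 3))

    new_xyz = rng.random((5, 3)) @ M          # transform new camera signals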

19:00 – 22:00

Honors & Awards Ceremony and Dinner

Room: Hollywood Ballroom

22:00 – 23:59

Afterparty and SMPTE Jam

Room: Hollywood Ballroom