Except maybe during the World Cup's four-week duration, once every four years.
This opinion piece was first published here on TVB Europe.
Zoom forward to the beginning of 2023, and Ultra Low Latency (ULL) is considered a must-have by all and sundry. What has changed?
Well, one thing is five solid years of aggressive vendor marketing of ULL solutions. This ongoing onslaught reminded me of over 15 years ago, when Microsoft was still in the game. Vast bouquets of linear TV channels were still critical to the TV business, and getting to the desired channel could be challenging for viewers. The firm had invented a fast channel-change solution that enabled zapping from channel to channel in well under a second. Microsoft's marketing machine convinced the industry that it was a vital need.
Rewind to when remote controls first appeared. It was an analogue world. As long as the signal was stable, channel changes were instantaneous. When digital TV appeared at the end of the last century, latency crept in between pushing the P+ button and the channel changing on the set. This delay was due to the need to decode the new channel's compressed digital stream. After some early hits and misses (I remember one of the world's first IPTV deployments in the early noughties with an 8-second zapping time), MPEG-2's average two-second GOP time set the standard. For over a decade, when you pressed P+ or P-, something would happen within about two seconds, and that was fine. Sure, it's always nice when things happen even faster. Users noticed, and perhaps would have churned away from services with 8-second zapping times, but nobody cared about the difference between one second and two seconds.
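As an aside for the technically minded, here is a rough, hypothetical sketch of why the GOP length set that two-second expectation: a decoder can only start on the next keyframe, so the zapping time is roughly a random fraction of the GOP plus some fixed tuning and buffering overhead (the 0.5-second setup figure below is an assumption, not a measurement).

```python
import random

def expected_zap_delay(gop_seconds: float, setup_seconds: float = 0.5, trials: int = 100_000) -> float:
    """Estimate the average channel-change delay when decoding can only start
    at the next keyframe of a stream whose keyframes arrive every gop_seconds."""
    total = 0.0
    for _ in range(trials):
        # The zap lands at a random point inside the current GOP, so we wait
        # for the remainder of that GOP before the first decodable frame.
        total += random.uniform(0.0, gop_seconds) + setup_seconds
    return total / trials

print(f"2 s GOP -> ~{expected_zap_delay(2.0):.1f} s average zap time")
print(f"8 s GOP -> ~{expected_zap_delay(8.0):.1f} s average zap time")
```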
What we wrongly call latency today (in reality, it is delay) also has a gold standard, set by decades of broadcast, at around five seconds. So, for the bulk of use cases, i.e. watching live TV, when you are close to those five seconds, reducing the delay further is nice, but even, say, ten seconds will not be a deal breaker for most viewers. Sports fans have long known they can get a score a few seconds earlier on the radio, and we've always lived with that. When someone on the ground tweets a live score, that too has a few seconds of delay, but it seems real-time enough to the community bounded by the five-second delay of the main live video feed.
There's also an elephant in the low-latency room… have you guessed?... sustainability. Lower latency means more and faster caches. Client-side video buffering may be considered undesirable from a user perspective. Still, it has proven to be the best way to ensure robust delivery without deploying significant resources in the network. That was ABR's secret sauce that enabled the internet's video revolution in the first place. As with all aspects of our lives that generate carbon emissions, we must also ask ourselves what is good enough.
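To make that trade-off concrete, here is a minimal, hypothetical sketch of the buffer-driven logic ABR players rely on: the client keeps several seconds of video buffered and chooses the next segment's bitrate from how full that buffer is. Chasing ultra-low latency means running with an almost empty buffer, which removes exactly the cushion that makes this approach robust. The thresholds and bitrate ladder below are illustrative assumptions, not any particular player's values.

```python
# Illustrative buffer-based ABR segment selection (assumed values throughout).
BITRATE_LADDER_KBPS = [800, 1800, 3500, 7000]  # lowest to highest rung

def pick_next_bitrate(buffer_seconds: float) -> int:
    """Pick the next segment's bitrate from current buffer occupancy.
    A full buffer can absorb network hiccups, so it tolerates a higher bitrate;
    a near-empty buffer (the ultra-low-latency case) has to play it safe."""
    if buffer_seconds < 5:
        return BITRATE_LADDER_KBPS[0]
    if buffer_seconds < 15:
        return BITRATE_LADDER_KBPS[1]
    if buffer_seconds < 30:
        return BITRATE_LADDER_KBPS[2]
    return BITRATE_LADDER_KBPS[3]

for buffered in (2, 10, 20, 40):
    print(f"{buffered:>2} s buffered -> {pick_next_bitrate(buffered)} kbps")
```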
Of course, some exceptional video use cases will require ULL, such as betting or professional uses like… yes, you've heard it before… telemedicine. Still, we're talking about live TV for the most part, and here, it's only once every four years that the World Cup is broadcast simultaneously over different platforms and people care about those few seconds. The rights to most other major sports events that attract substantial live viewership are owned by a single broadcaster. The use case where part of the family is watching a DTT broadcast while others are streaming the same content in the same house is too marginal to consider. 30+ seconds of delay is becoming untenable for live sports, but mainstream video consumption doesn't have to go lower once we're into single-digit delays. I doubt users would flock from a 7-second-delay service to a 5-second-delay one. Note that this opinion piece is just about the video use case; in other situations, ULL will enable new services that would otherwise have been impossible; just ask any 5G equipment vendor ;o)
The three stereo recordings below are of Mozart’s piano quartet K478 (1st movement). All video is identical. The first two have sound images dynamically adapted to the video image. The first has a moderate adaptation, and the second has more significant changes to the audio field, so that you hear instruments in the same part of the audio field as in the video. The third recording is a regular fixed stereo sound provided for reference. Please wear headphones or listen with real stereo speakers well positioned on either side of the screen. We suggest listening in the following order:
UPDATE (22/03/2021): 45-second demo with three stereo images is here.
Full-length video with moderately adaptive audio is here (extract here).
Full-length video with strongly adaptive audio is here (extract here).
Full-length video with regular fixed stereo audio is here (extract here).
To take this project further, one option could be to produce a live concert (not necessarily classical music) with a similar low-tech approach. Another would be to embrace more sophisticated technology such as NextGen Audio (NGA), like MPEG-H, Dolby Atmos, or DTS:X, and film in 8K. A third option would be to create multiple versions, perhaps using some AI or specific metadata to create a personalised edit and mix.
Another area to explore would be using track delay to create a sense of spatialisation within the stereo mix.
A bit more detail on the genesis of this project:
Working on UHD technologies, especially since I joined the Ultra HD Forum over five years ago, I have often been excited, my eyes and ears blown away by impressive demos of the latest and best audio and video technologies. Vendors like DTS, Fraunhofer, Harmonic, Sony, Samsung, Dolby, and others are good at making such demos. But I’m a geek, and it’s the technology that vendor demos show off to which I respond. New technologies serve no purpose in themselves; they should help filmmakers and other artists capture emotions and tell stories better. It is noteworthy that Dolby has been energetically pushing Atmos with some renowned musicians.
When I have the opportunity, I often present new UHD and audio technologies, including 8K and HDR to friends in the movie business in France. It was, at first, a mystery to me why they often wouldn’t engage. I have identified at least one reason for this resistance. Many filmmakers had their fingers burnt in the transition from analogue to digital as the first generations of technology made it harder to capture their artistic intent. I was rarely able to convey that almost all newer technologies can also mimic the older ones once in the digital domain. So, if you want the granularity of 2K instead of 4K, more limited colour space, or a reduced dynamic range, that’s all easy to achieve in postproduction.
This technology vs art issue has been obsessing me for a few years. When presenting UHD to many artists, I felt as if I were someone in a world with only two primary colours, trying to explain to artists what they might gain by adopting the third one.
It is essential to avoid the trap that any shiny new technology sets. When, for example, digital music production came onto the scene as early as the eighties, many became obsessed with sequencers and MIDI, and heavily overused repetitive drum machines.
The COVID-induced health crisis woke me from my slumbering thoughts. So many of us miss that sense of “being there”. What if there were a way to use today’s readily available technologies better to tell the story of a concert or maybe a play?
So, the video we made is the story of a concert delivered using only HD resolution and regular stereo sound. It was a few weeks of planning, a day’s shooting, and a few weeks of mixing and post-producing.
The three iPhone cameras were on tripods, all within a small space, so you get to experience the concert as if you were there. We filmed in 4K and produced in HD, so all camera movements were created in post-production.
A professional sound engineer did all the sound mixing (see credits). We adapted all the stereo sound images to the camera angles. We didn’t use any new next-generation audio technology, just plain vanilla stereo. To keep it low-tech and straightforward, the only effect used in postproduction was panning, i.e., controlling each audio input's left/right mix. So, when an instrument is on one side of the image, you hear it from that side. We refrained from adapting the volume of any track or changing any equalisation (apart from a simple EQ to enhance the cello).
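For readers curious about what “just panning” amounts to technically, here is a minimal, hypothetical sketch of constant-power panning applied to a mono instrument track. It is not the tooling our sound engineer used, and the pan positions and sample values are illustrative, not the ones in our mix.

```python
import math

def pan_mono_to_stereo(samples, pan):
    """Constant-power pan of a mono track into (left, right) channels.
    pan = -1.0 is hard left, 0.0 is centre, +1.0 is hard right."""
    angle = (pan + 1.0) * math.pi / 4.0          # map [-1, 1] onto [0, pi/2]
    left_gain, right_gain = math.cos(angle), math.sin(angle)
    return ([s * left_gain for s in samples],
            [s * right_gain for s in samples])

# Example: nudge a placeholder cello track towards the left of the stereo image
# when the camera angle puts the cello on the left of the picture.
cello = [0.10, 0.32, -0.21, 0.40]                # placeholder samples
left, right = pan_mono_to_stereo(cello, pan=-0.4)
```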
Using so much panning breaks with current mixing rules because we lose much of the depth of the sound field. The first version in the URLs listed above, where the audio adaptation is moderate, is the best compromise, and I am ready for the sound mixing community’s wrath. Early stereo recordings often used extreme panning. Today, there are still recordings with one instrument coming mainly from one of the two stereo channels, but these are rare, and the stereo field remains fixed during the recording. In our 8½-minute video, nine different sound fields are used about 90 times.
If you find this demo convincing, imagine how much more advanced technology like NGA and 8K could achieve.
You will feel immersion if you wear earphones or listen on a sound system with clearly separated left and right channels (either side of your TV or video screen).
The musical director, and not me, had the final say on all cuts. Artistic intent is our sole driver, not showing off the technology. We humbly hope this is one of the ways Mozart would have liked you to experience his music. We have made artistic choices and understand that others may want to experience this work of art differently. Perhaps, in a future recording, we’ll go for a piece with fewer intricacies, leading to fewer interpretation choices.
We are looking forward to your feedback.
Musical credits
Mozart Piano Quartet K478, first movement
Piano: Ionel Streba
Violin: Bertrand Aimar
Viola: Kyoko Yamada
Cello: Hélène Billard
Audio and video credits
Produced and shot at Hôpital Rothschild Auditorium by Ben Schwarz
This blog has been brewing for several years. As I’m locked down in rural Essex (UK), the global pandemic brings a sense of urgency to my dilemma. The whole world had never just stopped before in our lifetimes. As I write, I’m fortunate not to have been personally affected by the tragedy of the loss of life, for which I am very grateful.
As many have noted, the global situation is also an opportunity for us to question the very structure of our societies. As candidly as possible. Clean air, unpolluted skies, and strangers looking after each other have been the few positive features of this pandemic. But above all, it has become apparent that we could afford to pay to save the planet if we chose to. Some things have turned out to be more important than GDP and short-term economic growth.
Many of our mainstream politicians are clamouring for us to return to where we were as quickly as possible. But society as a whole might no longer want that.
What are the personal questions for those of us who love technology and whose chosen career is to promote it? Is now the right time to reposition ourselves and reconcile our aspirations with our actions? Let’s call this the Technology Enthusiast’s Environmental Dilemma or TEE-dilemma.
In this personal post, I’ll attempt to address the issue of making peace with my bewildered self in the different universes I inhabit and, hopefully, find some commonality between them. I’m a geek, a technology consultant, and a writer on video, innovation and Blockchain. I’m also an active member of the French green party. I want to understand better how I might be able to reconcile the following:
Helping to invent disruptive new digital services that we hope people will find useful
Assisting vendors and operators in deploying more and more powerful networks to distribute content ever faster
Promoting bigger, brighter, better video and TV technology
Evangelising on the subject of often energy-hungry blockchain technology
Continuing to buy the latest i-gadgets for myself
Campaigning for the green party
I became a geek in my early twenties. I’m now 55, so I’ve fretted about Moore’s law for over 30 years. That’s the one that said computing power doubles about every 18 months; it remained valid for decades. None of my computers ever reached the age of three for all that time. For over twenty years, I’ve also kept my mobile phone up to date, rarely letting it get more than a year old. I’ve provided strategic advice and consulting to startups and multi-billion-dollar operators alike. I’ve written dozens of business and technology white papers and hundreds of blogs on better ways of delivering video, and more recently have written about Blockchain.
Privately, I’ve also been a green political activist for 30 years.
I’m finding it harder and harder to make all this square up.
My parents were progressive, and although they were not very interested in technology, I grew up thinking that high tech constitutes Progress (with a capital P).
OK, so what constitutes technological Progress?
Wikipedia says Progress is the movement towards a refined, improved, or desired state.
In the early 1980s, when I was a teenager, my parents were already active in the green movement. Before the paper had an environmental correspondent, my father covered environmental issues in The Guardian. I remember a chat with the ecologist Teddy Goldsmith when I questioned him on what I perceived to be the Greens’ scorn for “Progress”. I was travelling in the boot of our station wagon, and he was in the rear seat (someone even more important must have been in front). He turned around, looked at me kindly, and said, “You know, Ben, as ecologists, we don’t believe in returning to the treetops. We embrace progress!” Of course, we then spent the rest of the journey arguing about what Progress was, measuring happiness rather than growth, etc.
That moment has stayed with me ever since. When does Progress have a capital P, and when is it just ephemeral trivia? Any technology can indeed achieve both and everything in between.
The effectiveness or impact of something new is not necessarily correlated to its usefulness — social, environmental or otherwise.
If you introduce the hammer to a society that doesn’t have one, its people will likely learn to build better houses. At the same time, and probably quickly, someone will also use one to crush another person’s skull more effectively. Progress is neither good nor bad per se; it’s what we do with it.
An example of a useful disruptive innovation?
Like many innovations, mobile text messaging or SMS came about by accident.
WhatsApp is an example of the continuation of that disruption in personal communications. The profound change came from opening up a new communication channel that was both real-time (like a phone call, which you receive at the same time it is made) and asynchronous (like email, which you answer at your leisure). People could communicate in numerous ways that they couldn’t access before. The proof of the pudding is in how even tech-phobic people managed to use SMS messaging before the invention of T9 predictive text. Remember when you needed to hit the ‘2’ key three times to get a ‘C’? Who recalls which key represented the space bar? Yet text messaging swept the world because it gave us something we didn’t have before and answered a need to communicate asynchronously in real time. When we send messages, we want the receiver to have the information at once. When we receive messages, we want to control when we acknowledge or answer them. SMS leverages the benefits of real-time communications without enslaving us to their demands. Surely that’s the kind of Useful Progress Teddy Goldsmith meant?
Here my TEE dilemma is exacerbated by the observation that an element of Reaganomics-style “the market always knows best” helps determine what constitutes Progress. Until the market spoke, the paragraph above could only be conjecture.
Trade-offs in the Age of Greta Thunberg
The SMS example showed that people are prepared to learn a ‘barbaric’ user interface when they gain something they consider worth that amount of effort.
But such a trade-off is very complex to understand with as many rational (e.g. time-saving) as irrational (e.g. ‘feeling good’) criteria.
In 2020, a new criterion for such choices, one that is both rational and irrational, has entered our lives. As a society, we are starting to take environmental impact seriously. Well, many of us are.
Purchasing an electric vehicle illustrates that mix of rational and irrational behaviour. If you’ve done the research, you’ll know that today’s electric or hybrid cars mostly have a more harmful global impact than an optimised traditional vehicle. That is partly due to the inefficiency of battery production and disposal. Yet we live in a consumerist capitalist society. So, we know that the only way to get to more efficient electric cars is for the industry to make money from them today. That ensures that they will continue research for tomorrow. As with the good-vs-bad potential uses of the hammer, the ‘greater good’ argument becomes even more apparent.
SMS would never have taken off if it had only delivered an incremental improvement rather than a unique and disruptive communication method.
We must make constant trade-offs as we face our own TEE dilemmas. As we see repeatedly, this can involve accepting a less-than-ideal project as long as it’s a stepping stone to something better.
CDNs represented Progress; do they still?
Content Delivery Networks, or CDNs, enable websites and online services to offer more engaging content. In the early days of the Web, when images and audio became part of websites, CDNs solved the problem of pages taking many minutes to load. They also allowed millions of people to access the same content simultaneously. Today, much more challenging things are available online for which CDNs are needed, very high-resolution video for example.
CDNs have caches (usually hard disk space on specialised computers) that store content closer to users, so the data doesn’t need to travel worldwide. For the most popular content on a service like YouTube, it makes sense from an energy perspective to add caches worldwide. These devices are typically within your Internet Service Provider’s network. Such an approach offers a more sustainable way to deliver the ever-expanding content that many will want to watch. So, well into the future, there is a level of CDN development that we can consider as Progress with a capital P. However, once the necessary CDN infrastructure is enabled, the CDN ecosystem continues to find other ways of growing sales. That’s still the only way we know how to run high-tech businesses.
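As a rough illustration of the mechanism just described, here is a toy edge-cache sketch (the class name, capacity and URL are hypothetical): popular objects are served from a cache near the viewer, and only misses make the long, energy-costly trip back to the origin.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU edge cache: recently requested objects stay close to viewers,
    so only cache misses travel across the world to the origin server."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url, fetch_from_origin):
        if url in self.store:                # hit: served locally and cheaply
            self.store.move_to_end(url)
            return self.store[url]
        body = fetch_from_origin(url)        # miss: the long haul back to the origin
        self.store[url] = body
        if len(self.store) > self.capacity:  # evict the least recently used object
            self.store.popitem(last=False)
        return body

cache = EdgeCache()
segment = cache.get("https://example.com/popular-video/seg1.m4s", lambda url: b"...")
```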
Issues that vendors are working on include making load times even shorter and reducing the delay for live streams. These are genuinely cool features for consumers. Are they “nice-to-have” or “must-have” features? To what extent do they, like SMS, enable something that wouldn’t otherwise be possible? Reducing stream delay avoids hearing “GOAL!” screamed in the neighbour’s flat half a minute before you see it. This “issue” is a real problem once every four years during national team games in the World Cup. Beyond that case, is reducing the delay below 10 to 20 seconds Progress or mere convenience?
Of course, there will always be some niche use cases like live interactive talk shows where low delay times become a “must-have”. But otherwise, the jury is still out in most situations.
In the meantime, to continue to improve user experience, new infrastructure is being disseminated around the globe to be ever closer to where people consume content. We started doing this in a few dedicated data centres, then moved to the Cloud, which physically exists in hundreds of places. Now we are heading to the Fog, which is also referred to as edge-computing. Cloud computing infrastructure is typically in large data centres, often run by companies like Amazon or Microsoft. In contrast, Fog or Edge computing happens much closer to your home, for example, in a cabinet in your building or on the kerb nearby.
This trend creates significant power requirements. When you search online for “Power consumption on CDN”, the top results are all from academic research. CDN vendors haven’t yet significantly invested in this space. I guess because they don’t see it as good for business. They are mistaken, and this attitude needs to change. The ability to recognise where disruption would be desirable is essential. Netflix showed us over the last decade that completely disrupting their DVD distribution business with Internet streaming was necessary. Providing the platform for that disruption was even better, as the Blockbuster chain found out too late.
Another example is with big oil conglomerates for whom the success of renewable energies seems undesirable. Yet the biggest oil companies in the world are investing massively in renewable energy. If you’re going to lose a limb, it’s better to remain in control and choose the surgeon yourself.
CDN stakeholders could embrace energy conservation by creating an eco-friendly label.
Note, for instance, that peer-to-peer CDNs that only use users’ spare capacity are, although not energy-neutral, significantly more friendly to the planet.
We must, however, be careful to avoid greenwashing the issue by just saying that ‘we build hardware with recyclable materials’. That’s not enough. We need to find new models.
In the CDN arena, I’ve heard ideas about letting end-users choose the level of service they want in real time and pay for it. That could be a start. An associated idea would be to certify a video’s “green” path through the Internet, perhaps using blockchain technology.
Is Internet streaming or broadcasting better for the planet?
Ten years ago, working with another independent consultant, I compared the energy efficiency of different TV distribution technologies: terrestrial, satellite and Internet. We did some preliminary work and then tried to sell a study to any stakeholders involved. We couldn’t even find the right person with whom to talk. If you’re wondering how the energy footprint compares, expect the usual “it depends” answer. You’ll find different studies coming to diametrically opposite conclusions. It depends on too many factors to provide a simple answer: size and geographic spread of viewership, number of concurrent streams, delinearisation, decoding standby power consumption, etc.
Please let me know if there are any takers now; I’d love to take another stab. Relevant data should be available to regulators and consumers.
And do we need ever-better video and screens?
As a cinemagoer, I am always attracted to the movies by high tech. I have even been to the cinema a few times not because of what was showing, but to experience the newest or largest screen in Paris. I’d happily cross town to see a movie with next-generation immersive sound, even if the same film played at my local cinema with plain vanilla audio.
I’m an active member of the Ultra HD Forum, an industry group that promotes the next generation of video technology. It may be part of my job, but I do it because I love it.
TV screens have grown by an average of an inch per year in most markets. To cater for the latest colour and contrast capabilities, the brightness and hence the energy consumption of screens have more than doubled over the last decade.
To decode the newest video formats and run apps, the processing power of TVs is also growing fast. A high-end television now has little to envy a computer for.
Video is one of my worlds. I always took it for granted that all its innovations constituted Progress.
I consoled myself that although devices were getting more powerful, hardware was getting more efficient. Ultimately, at least from the energy consumption perspective, we weren’t on a worsening curve.
As TVs get more powerful, they deliver more services, and even if they do so with improving efficiency, the result will still be more energy consumption.
Leaving aside the carbon footprint of making them, the latest TVs can be very energy-hungry during peak usage. Such occurrences can happen when all of the screen is very bright, or the TV is working hard to decode a very highly compressed video. However, the set will also be very energy efficient in other circumstances.
Whether these new energy-hungry video technologies become the right kind of Progress depends on the artists in the cinema and TV production worlds. They would need to understand the unique capabilities like high dynamic range with more colours and contrast, much higher resolutions, screen refresh rates and immersive sound. When they can, they would need to use these new features as an extended or even new vocabulary to enhance their storytelling. This new language also needs learning for live events and sports production. Despite having four times more resolution, today’s 4K football matches using the same shots as HD don’t benefit from the potential of the new, more immersive storytelling that would come with wider angled shots.
Improvements in the latest fancy TVs and associated technologies will be justified, and the Progress they represent will deserve a capital P, once artists start speaking through them in the new languages of higher resolution and contrast, using immersive sound to say things that couldn’t otherwise be told.
Can Blockchain change the world without consuming so much energy?
In 2016, my sixteen-year-old son suggested we build an Ethereum mining rig. I didn’t know what he was talking about, but we started researching it, so that less than a year later, we were mining cryptocurrency. The extra hundred euros a month in electricity costs seemed like nothing. We were generating a profit, and I got increasingly enthusiastic about how the underlying blockchain technology would improve the world. Like most blockchain and bitcoin newbies, I was truly inspired by videos by Andreas Antonopoulos (aantonop) on the evil banking system. Of the countless stories that fired me up, I remember how cryptocurrencies would fix the problem of migrant workers who send their earnings home. They could (and still can) spend up to a month of their yearly income on a remittance company. Crypto can knock that 9% fee down to almost nothing.
Being of the X generation, I feel we’re the true digital natives because we tasted a pre-internet world before building the current one. I have always been fascinated with the concept of a digital entity. As I write on my computer, the words already exist in a few different places: on the bitmap of my screen, in the RAM used by the word processor and in caches and temporary backups. Digital entities have never really existed as physical ones do. There are only copies of them. Even if there’s only one copy left, it still has the properties of a copy. It’s always ready to be duplicated ad infinitum and only rarely and temporarily exists as a unique copy. Thanks to Blockchain, now is the first time you can truly own something digital because it is finally unique. The impact of this will take many years to permeate through the whole of the Internet and our lives, but there will be a before and an after. One day we’ll wonder how we ever put up with spam or all the digital fraud surrounding us. One day we could own part of a digital piece of art. With the tokenisation of the economy, we can invest in just that part of a company that interests us. Even as small shareholders, we’ll have an influence.
So that’s just the tip of the iceberg. I became pretty obsessed with it a few years ago, tirelessly explaining Blockchain and how it would change society to anyone who would listen. My apologies if you were one of my early victims. As Bitcoin hit 20k in December 2017, I thought the world had understood too.
We saw people become millionaires or even billionaires overnight. But money corrupts, and if we’re honest, most enthusiasts saw our genuine passion for the technology tainted with at least a small desire to get rich quickly.
I was lucky to find a company in 2018 that, although seeded with some of those riches, was focused on delivering Blockchain’s promise. So I got my head under the bonnet. I’ve mainly been writing about mobility, energy and fintech, and I still need to get my hands truly dirty on implementation projects.
I do not doubt that Blockchain will one day deliver on its incredible promises, even if I can’t say when. Of the many areas Blockchain promises to change, perhaps the most relevant to my dilemma is how energy distribution can be improved. Blockchain’s secure identity management enables marketplaces where consumers can produce or resell energy themselves. An intelligent electric car of the future will share excess power back to the grid during peak consumption, or with another vehicle in need during a traffic jam.
Cryptocurrencies like bitcoin are just one of countless blockchain applications. By far the most valuable, Bitcoin uses a technique called proof-of-work, where energy expenditure is crucial in bringing security. Enthusiasts go as far as explaining how this will benefit the planet. They will point out that bitcoin mining can be set up to use spare energy that would otherwise go to waste. Bitcoin could even underwrite new renewable energy initiatives where capacity doesn’t always match demand. So, we’d only do the energy-consuming mining when the sun is shining or the wind is blowing and we don’t need all the power generated. But those arguments feel like an afterthought, and as I write this in early 2020, the global energy consumption of the bitcoin network is equivalent to that of Austria. My TEE dilemma has resolved itself on this one. I believe that bitcoin has to move away from current proof-of-work algorithms or otherwise lose its pre-eminence to coins that use less energy-centric approaches to security. This reservation is also valid for some other cryptocurrencies like Ethereum.
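To show why proof-of-work and energy are inseparable, here is a toy, hypothetical sketch of the mechanism (nothing like Bitcoin’s real parameters): security comes precisely from forcing miners to grind through hashes until one meets a difficulty target, and every extra digit of difficulty multiplies the expected work, and therefore the energy, by sixteen.

```python
import hashlib

def mine(block_data, difficulty_zeros=5):
    """Toy proof-of-work: find a nonce whose SHA-256 hash starts with
    difficulty_zeros zero hex digits. Each extra zero makes the search
    (and the energy it burns) roughly sixteen times more expensive."""
    nonce = 0
    target = "0" * difficulty_zeros
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("a toy block", difficulty_zeros=5)
print(f"Found nonce {nonce} (about 16^5 attempts on average) -> {digest[:16]}...")
```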
As we are still looking for ways out of the COVID lockdown, Blockchain promises to solve the people-tracking challenge without compromising privacy and securing supply chains for PPE and other supplies. So far, though, despite some valiant efforts, the open-source blockchain community hasn’t been able to respond fast enough.
And so, after this, what might change?
Even if we return to our previous ways with a vengeance once the coronavirus pandemic is behind us, something will have changed. The most eager and venal decision-makers will have heard the birdsong during the epidemic and felt the renewed environmental aspirations themselves or at least through others. They will be more open on an issue like Corporate Social Responsibility (CSR) if nothing else. Social responsibility has continuously been on my radar from a personal perspective and as a political activist. But so far, from a corporate angle, I’ve only seen real engagement episodically.
My brushes with CSR
In my early career in IT, I was briefly a trade union rep. It was the early 90s, and I was trying to persuade the CEO to commit some meagre resources to a social issue we were impacting at the time. He answered that our job as a company was to maximise profits so that we’d pay enough taxes for the state to fix that problem, whatever it was. That answer was unchallengeable then. Even before the pandemic, such attitudes were questionable, but after COVID-19, they most definitely will be questioned. In liberal, free-market societies, beyond greed, there is a whole belief system that growth and profit are the only two real drivers for companies. Anything else is only there to serve those two.
A decade after that incident, I was working for a major Telco. In the early noughties, CSR had only just left the confines of academia, the voluntary sector and a few small companies, and was beginning to knock timidly on bigger corporate doors. The iconic Ben & Jerry’s (which had reserved 7.5% of profits to fund community projects since the mid-80s) had just been sold to Unilever. I was tasked with initiating our approach to CSR. The Cloud and its energy-hungry servers were not yet an issue, and we didn’t see Telecoms’ carbon impact as significant back then. After much soul searching and advice, we rightly concluded that our mission statement going forward would be to provide access to information and services. In that context of “information super-highways”, trendy at the time, we thought about CSR as if we were a highway operator.
What was their responsibility to society, beyond the safety concerns that are taken for granted? It can’t be their responsibility if a road took you somewhere you didn’t intend to go. Is that even anyone’s responsibility? What if the road operator knows that someone intending to commit a crime takes their highway? In the borderless world of the global internet, there are no laws to determine right from wrong or even what a ‘crime’ is. So, taking a worldwide and timeless view, I asked what we could identify as universally wrong. Over time and space, the only issue that consistently came up was the sexual exploitation of one’s own children. Nothing else fit the bill. Needless to say, after months of effort on this, my report and suggested actions simply got buried.
‘CSR manager’ will be a more desirable job in a post-COVID world. I’m told by a US-based specialist that her activity is skyrocketing as businesses scramble to bring social and environmental responsibility into their blueprint rather than a mere footnote of their strategy.
COVID-19 has brought the age-old personal dilemma of doing the right thing versus whatever brings more immediate rewards powerfully into focus for individuals. Organisations, including governments, seem to be affected too. My TEE dilemma has become a gaping hole consuming more of our collective psychological energy.
Green parties the world over have been grappling with this since the 1980s, adding the environmental dimension to the society vs economy debate. Teddy Goldsmith’s insistence that Progress is mainly good is echoed in more recent ideas about Profit not being intrinsically evil either. In one of his presidential campaigns, Obama sowed the idea that an environmentally friendly investment could create jobs and wealth. Green Parties often push the belief that becoming a world leader in energy conservation creates jobs, re-sellable expertise and wealth (yes, they even say that word sometimes). The Green New Deal is rooted in the workings of free capital markets.
Whatever posture they take in the short term, few greens, until now, have been revolutionaries. They have mostly listened and made compromises. Despite being otherwise radical, the XR movement follows social distancing more than most. But if our cherished liberal democracies don’t find a way to tackle global warming and our societies continue to polarise, the green movement will be forced into more profound radicalisation.
Sustainable accounting has been promoted for over a decade to reach a sustainability goal (the star in the diagram above). Some states like France, for example, have started to introduce CSR (called RSE in French) into law.
We all know how to do sound business to a certain degree and aspire to this. We all care about the environment to some degree. So, we can all empathise with the difficulty of finding a balance.
A post-COVID-19 world (or maybe for a few problematic years, it’ll be an ongoing COVID-19 world) will leave less room for any corporate head-in-the-sand attitudes.
I’ve stuck my neck out a bit in this post. I sincerely hope it won’t upset any of my trusted clients but will be of genuine interest. I no longer want to bite my tongue when asked to develop or promote something that is a definite step backwards regarding social or environmental impact. Now I will say what I think and offer to explore how to reduce that impact where possible. Still, I’ll always be fascinated by new ways of solving problems and will remain a tech enthusiast — hopefully, an increasingly wiser one.
Is #8K yet more hype to push TV set sales to unsuspecting viewers, or an unstoppable new trend that is already coming? For over a year now, I’ve been itching to get off the fence.
However, ever since I read Daniel Kahneman’s Thinking, Fast and Slow (thanks for the recommendation @Arnaudb92), I have lost faith in expert predictions on any subject, including my areas of expertise, and especially in my own predictions. I’m nevertheless going to stick my neck out because I see so many wrong reasons used to dismiss 8K. I know I’ll look foolish if you dig out this blog in a decade and 8K is still nowhere, but I’ll take that risk because I believe 8K will be big well before then, and many will have joined NHK, which has been running a commercial 8K service since December 2018. Here are eight reasons why:
1. TV manufacturers have always been incredibly efficient at pushing any new tech to consumers (ask any 3D set owner). This doesn’t imply that the tech is viable, just that the market will try it if set-makers put enough effort into marketing it, and CES 2019 announcements and demos confirmed that this is likely to happen.
2. Is 8K enough of a differentiator over 4K to justify the expense? From a resolution-only point of view, the enhancement of 4K over HD has a subjectively lower impact on user experience than the move from SD to HD had at the turn of the century. Moving from HD to 8K will provide at least as big a wow factor as moving from SD to HD did in its time.
3. Indeed, resolution is only one of many dimensions that create the video user experience. So even if resolution alone does not move the market, user enthusiasm may come from a combination of factors such as high frame rate (above 100 fps) and 8K.
4. Even if it takes a few years to reach mass-market, early opportunities already exist in niche areas like, for example, in luxury stores.
5. Screen size and viewing distance are a blocking point only in traditional TV viewing experiences. This issue will recede as growth in average screen size continues unabated at around an extra inch per year in most markets (see the quick calculation after this list).
6. Furthermore, having whole walls made of screen is no longer science fiction. Samsung has been pushing modular screen technology for several years, where modules are simply plugged into each other. At the same time, LG brought screen thickness down to just a millimetre over three years ago, so screens can be stuck onto a wall. In this context, overall screen resolution will need to be significantly higher than that of any single item it displays, including a video stream.
7. Experts do not yet agree on this, but much of the considerable 35mm film archive around the world can be rescanned at resolutions higher than 4K, and 70mm film can be rescanned at at least 8K.
8. 3D video in the living room is a failure many would like to forget. It turned out to be just too complicated, needing special glasses and new content for a few fleeting moments of wow effect, and too much 3D made people feel sick. Yet a major driving force that got so many people excited was the immersive effect. If you haven’t yet seen an 8K demo up close, you need to get to a store that has one. If you just let your senses take over, it is a truly immersive experience. The extreme level of detail gives a sense of depth that regular video cannot compete with, and it has the potential to do this for any piece of content, for however long the filmmaker wants.
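As a quick, hypothetical calculation on the screen-size and viewing-distance point (assuming roughly 20/20 acuity, i.e. one arc-minute per pixel, and a 16:9 panel), here is how close you need to sit before the extra resolution becomes visible; the 65-inch diagonal is just an example.

```python
import math

ARCMIN = math.pi / (180 * 60)  # one arc-minute in radians (assumed 20/20 visual acuity)

def max_useful_distance_m(diagonal_inches, horizontal_pixels, aspect=16 / 9):
    """Distance beyond which a 20/20 viewer can no longer resolve individual pixels."""
    diagonal_m = diagonal_inches * 0.0254
    width_m = diagonal_m * aspect / math.sqrt(1 + aspect ** 2)
    pixel_pitch_m = width_m / horizontal_pixels
    return pixel_pitch_m / ARCMIN

for label, pixels in (("HD", 1920), ("4K", 3840), ("8K", 7680)):
    print(f'65" {label}: the extra detail only shows if you sit closer than '
          f"~{max_useful_distance_m(65, pixels):.1f} m")
```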
But will 8K offer another hype wave to ride?
Like most industry observers I believe the hype cycle exists, but I have also observed occasions where it didn’t materialise. I became a software engineer in the 80s. Relational databases had taken over the corporate world. In the 90s, Java became the next best thing, well since coffee. It was based on object-orientation (OO), and I expected OO to become the next upwardly mobile hype cycle to ride. I proudly pushed the concept on my CV assuming that was my career path. Nothing happened. OO penetrated the whole of IT, but slowly, without the buzz and hype I expected.
When HD changed the world of video, it was a massive hype generator. It’s looking like 4K is a significantly less potent marketing tool than HD was. I guess that 8K will be even more of a damp squib in terms of hype. That doesn’t change the fact that it will permeate through video workflows, just a bit more quietly.
Although my first NAB was over 15 years ago, I’m still keen to get out to Vegas this year - OK not for Vegas the place, which gives me the creeps, but to catch up with the people, the trends and the tech. It remains one of my favorite conferences despite its gargantuan scale. Here are some of the questions I'll be looking to shed light on this year.
UHD
When I can get time off the Ultra HD Forum booth that I’ll be busy on, I’ll be looking into how the first generation of mature UHD technologies is doing. The debate as to whether 4K resolution was needed for a true UHD experience was all the rage just as the trailblazers were deploying UHD around 2014. Now that the paint has dried on the static metadata-based HDR solutions (HDR10/PQ and HLG), that battle seems over. Proponents of 1080p/HDR are grinning and claim they have won this round: we are already seeing some such content appearing on Netflix… For me the jury is still out, but I’ll be nosing around to gauge people’s true intentions here.
But what’s next for UHD? I’ll be gauging the readiness of the next set of technologies and, as my friend Ian Nock says, how they might be deployed without breaking what’s already there. In the dynamic metadata space, Dolby Vision is already out there. Does there have to be a winner and a loser with HDR10+, or is there room in the market for both? As an audiophile, I’ll be keen to find Next Gen Audio demos and, here again, fathom whether the existence of several standards (Dolby Atmos, MPEG-H, DTS:X, …) is holding things back.
Encoding
If one of my friends from the encoding space is kind enough to explain to me what's going on, I'll try to catch up on the encoding wars, which have confused me with too many competing stakeholders to understand on my own. HEVC was supposed to represent a smooth transition from H.264; now I don't know whom to believe. The moving parts range from imploding patent pools to Google, Apple, Amazon and Microsoft, without forgetting the streaming people like DASH, H.26x, disruptive start-ups, etc. Thierry, help! Decode it for me, tell me what's going on.
VR360
Having just published an eBook on VR360 that doesn’t predict 2018 is the year of lift-off, but does explain why it’s the year to get involved, I’ll be eagerly looking at how much we got right and whether the hype has finally hit bottom, so we can now start to do business… I’ll do my best to get to the VR-IF masterclass on day 1 and, if I'm lucky, get an update from Rob Koenen.
Streaming Delay
I’ve been commissioned to do some work on OTT streaming delay, under the assumption that operators really care about reducing it. I’ve been very surprised that, in my investigations so far, this is not the case. Sure, they’d like to reduce delay, but it’s low down their priority list. It’s got me wondering whether, as OTT streaming becomes more prevalent, the “norm” might, quite a few years from now, become a 10-20s delay, where whatever broadcast is left gets delayed so as to be synced with the crowd… probably science fiction, but I’ll test out the idea.
Driven by Data, at last?
When I joined France Telecom (now Orange) in 2001, I remember a meeting where it was explained to me that our unique access to amazing data on subscribers, and on what they did, meant we would become the kings of data-driven UX, data-driven decision making and data-driven just about everything… That vision of analytics was spot on, just too early and focussed on the wrong kind of operator. The Silicon Valley giants now dominate the world with data and AI (which we didn’t see coming back then). So, have we truly entered the data era where other operators can get some of the pie? The recent Facebook/election scandals seem to say so. I’ll be looking around at where vendors in the ecosystem are on the holy data grail. Is the market taking off for real, or is it still vendor fantasy?
Virtual Reality is here to stay, in gaming at least. But what about video 360?
We're at the top of the hype cycle, so you'll hear more and more noise; resist!
No money yet, just another living-room 3D; gloom, gloom, gloom…
Users want a pro to point the camera, …
Current V360 quality (resolution) is still sub-par
... but it's improving so fast, and with a great UX already available in the labs, it could be with users within a year or two.
However much bandwidth it ends up needing (10-1000Mbps), live V360 will leverage the best networks.
Video 360 will add something, be it small or large, to operators' product portfolios and differentiation.
Even if they were to stay on the sidelines, VR and Video 360 will affect how we think of user experience and may well influence content production.
The video tiling technique offers a scalable solution to delivering video 360 at a fraction of the bandwidth. Download this presentation to learn more, and stay tuned for the eBook due before EOY 2017.
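As a hedged back-of-envelope illustration of why tiling saves so much bandwidth (all figures below are assumptions for illustration, not numbers from the presentation or the eBook): only the tiles inside the viewer's current field of view need to be sent at full quality, while the rest of the sphere can be sent as a low-quality fallback.

```python
# Back-of-envelope for viewport-dependent tiled 360 delivery (assumed values only).
full_sphere_mbps = 80.0           # bitrate to send the entire 360 sphere at top quality
tiles = 24                        # e.g. a 6x4 tiling grid
tiles_in_viewport = 6             # tiles covering a ~90-100 degree field of view
fallback_quality_fraction = 0.15  # background tiles sent at a low-quality fallback

viewport_mbps = full_sphere_mbps * tiles_in_viewport / tiles
background_mbps = full_sphere_mbps * (tiles - tiles_in_viewport) / tiles * fallback_quality_fraction
tiled_total_mbps = viewport_mbps + background_mbps

print(f"Naive full-sphere delivery: {full_sphere_mbps:.0f} Mbps")
print(f"Tiled delivery (viewport high, rest low): {tiled_total_mbps:.0f} Mbps "
      f"({tiled_total_mbps / full_sphere_mbps:.0%} of the naive cost)")
```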