Happy New Year from all of us at Southpoint Films!
The ‘10s were an incredible decade for us – we started our company in 2013, after all! – and they introduced a huge number of changes to the world of video production. When we cast our minds back to 2010, many corporate videographers (including the team here!) were still using tape-based professional camcorders and only just beginning the transition to HD, and we lit our shoots with hot (and energy-hungry) halogen and fluorescent equipment, which has since been replaced by cool, low-power LED alternatives. The way we used to work is barely recognisable to us anymore.
Since 2010, we’ve seen the introduction of powerful video production modes on entry-level DSLRs, making them a de facto video production tool rather than just a choice for stills photographers. We’ve seen the birth of new “Cinema” cameras that combine the cinematic look of DSLR footage with traditional video camera features like microphone inputs, and the arrival of specialty cameras such as drones, 360° VR cameras and action cameras like GoPros. And video resolutions keep climbing, with 4K now becoming the standard and 8K on the horizon for the next decade.
Additionally, the world of post-production has changed dramatically. Colour correction has been revolutionised by LOG colour (giving editors more control over how a piece of footage looks), while solid state drives, multi-core processing and GPU acceleration have made our editing computers blisteringly fast (and much smaller!). The thought of delivering content on optical media (DVD or Blu-ray) is almost laughable now that we’re ten years further into the smartphone age. The majority of content we produce ends up on YouTube, Vimeo or mobile-only services like Instagram and Snapchat, both of which debuted post-2010.
To put things in perspective, Netflix didn’t launch in the UK until 2012. Now it’s ubiquitous in almost every British household. Things have changed so much.
So what’s next for the world of video production, creativity and technology? Well, we’ve looked into our crystal balls and cast our predictions for the ‘20s. Have a look and tell us – do you agree?
We covered this topic in an episode of our podcast, “Are We Rolling?” Have a listen below, or subscribe in a podcast player of your choice.
Drones will get better and smaller, but flying them legally will get much harder
DJI released their first “Phantom” drone in 2013, marking the start of the aerial videography and photography revolution. In its earliest incarnation, this drone allowed users to attach a GoPro to the bottom of the aircraft and capture photos and videos during a flight. It was a fairly rudimentary system and required some technical know-how to get it working nicely. Without the right cables, connectors and antennas, there was no way to see what the camera was recording during the flight, so operators either had to do some technical wizardry or just hope for the best.
Nowadays commercial drones include their own built-in cameras, and the technology they come with makes it fairly easy for anybody to fly one of these aircraft. In some cases, you don’t even need a controller; you can make gestures with your hands and the drone will perform actions based on what it sees through the camera. Or, you can set your drone to follow a subject using an app on your smartphone, and it’ll fly automatically around the person or object you’ve selected. It’s incredibly clever, and a huge technological achievement in just a few short years. (Systems like these have been available since around 2016.)
It’s highly likely that the technological improvement of drones will continue over the next ten years. Drones will get easier to fly, drones will get smaller (as they have been doing), and the quality of the built-in cameras will increase as the sensors get better – but, also, as the computers within the drones become powerful enough to do advanced machine learning to overcome the failings of the tiny optical sensors they’re packed with. (Bigger sensors usually mean better images.) This is what companies like Apple have been doing with their smartphones for several years now; the actual sensors in the cameras are awful on their own, but the computer is doing billions of calculations to try and make the image better once a photo is taken, usually with great results.
However, as great as the technology becomes, we believe it’ll become increasingly difficult to fly a drone legally. Last year the UK introduced new legislation for registering, controlling and policing drones, which was sped along by the Gatwick Airport drone incident in 2018. (Which I still believe was a co-ordinated hoax, but that’s a story for another time.)
This new legislation doesn’t make it any harder for a commercial operator like Southpoint Films to fly a drone – we’ve already had to meet strict operational requirements for many years now (Southpoint Films is a CAA-approved drone operator and we meet all of their requirements for commercial work) – but I believe the changes we saw in 2019 were only the beginning of a wave of new laws controlling how these devices can be used.
The authorities are understandably concerned about improper use of drones; there have already been instances globally where consumer drones have been used to carry out terrorist attacks, carry contraband into prisons, and violate the privacy of individuals. There are definitely people who are very much opposed to their existence at all. But this doesn’t help the honest filmmaker who just wants a pretty aerial shot of something that catches their eye – or of something that their client has asked for.
Additionally, as aerial technology improves, airspace is going to become increasingly congested. When a client asks us to use our drones, the biggest reason we have to say “no” is because we would never be able to get permission to fly at the location in question. Usually this is because of its proximity to other air traffic, such as an airport, heliport, or a military base. (A big issue in any major city, including Southampton.)
Many airports, large and small, are looking at expanding, and this will only increase the “no-fly zone” radius for other aerial operators. Combine this with the fact that global corporate juggernauts like Amazon and Uber are also looking at utilising airspace for highly demanding commercial activities, and it’s not particularly wild to suggest that small video production companies and other individuals will find it very difficult to receive priority over who gets to fly where and when.
The biggest hope is that the sub-250 gram category of drones remains unregulated and that the quality of these drones increases quickly enough for small operators to continue flying. Otherwise we’ll have better drones, but no ability to fly them in places of interest. (Unless footage of barren countryside in the middle of nowhere is what you’re after!)
Augmented Reality will provide a platform for Virtual Reality to finally succeed (and AR will be great too!)
We’ve been fans of the 360° VR video format for a long time now, and we’ve worked on some really cool projects that utilise this technology in interesting ways. The only trouble is that watching 360° VR content is a bit of a pain right now.
The best way to experience a 360° VR video is to use a dedicated VR headset. While this hasn’t always been an easy thing to acquire, the past few years have seen some pretty good options appear; Oculus, HTC and Sony have brought several high quality, affordable options to the market. But almost all of these headsets have downsides.
Up until very recently, all of these headsets have required a wired connection to a powerful computer in order to operate, which adds to their already significant price tag. It also makes them bulky and awkward to use, especially since many headsets require small tracking beacons to be placed around the room to help the headset understand how you’re moving through the space. And, of course, the user needs enough space to use the headset without bumping into furniture or knocking things off shelves. It’s all rather demanding – and this is currently the absolute best way to experience 360° VR video.
The not-so-good ways are to watch it on another device, such as on a computer or a mobile device. On a computer, users can click and drag around the video to change which part they’re looking at. On a mobile device, the user can usually hold up their device and move it around to look at different parts of the video. (Or they can put their smartphone in a mobile VR headset, but these are usually quite low quality and difficult to use.) In most cases, the user also needs to have special software on their device, like the YouTube app or VLC Media Player, in order to watch the content properly.
Because 360° VR content is unlikely to work in the off-the-shelf software that comes installed on most devices, it’s always a gamble as to whether the viewer will actually be able to watch it with the 360° effect. This really isn’t ideal and, in my opinion, is why 360° VR video content hasn’t truly taken off yet. Which is a huge shame, because the camera equipment for making 360° VR content is better than it’s ever been, and a real joy to use, even if it remains niche.
360° VR video reminds me of 3D video. 3D was a very cool technology that was an industry darling in the early ‘10s but had exactly the same problem as 360° video does now. The methods for capturing 3D content were very clever and the industry rushed to support 3D video in post-production workflows, but the methods for consuming 3D content never caught on – largely because they weren’t very good. Aside from being an option when watching blockbuster films at the cinema, 3D is largely gone from the public consciousness.
At this point you may be thinking that I’m about to suggest that VR video is dead in the water, just like 3D video, and that my prediction for the next decade is that it disappears, but you’d be wrong. There is a glimmer of hope for VR video content.
Over the past few years, Microsoft, Apple, Google and many other major technology companies have been investing heavily in Augmented Reality (AR). AR is a computing format that blends the digital and physical worlds, and the results are already very impressive.
The biggest AR hit of the ‘10s was Pokémon Go, a smartphone game that lets players find and catch Pokémon (imaginary creatures from the franchise) as if they were in the real world. As part of the catching process, players can optionally overlay the monster over their surroundings, as if the monster were actually in front of them.
While Pokémon Go is a fun use of the technology, practical use-cases for AR are slowly appearing. IKEA have a mobile app that lets you place life-size furniture in your home to see how it’ll look if you were to purchase it. Apple includes an app called Measure with their mobile devices that lets users measure items in real-life using just their device’s camera. Google have released an AR mode for Google Maps which shows your directions over a feed from your device’s camera. And, of course, social apps like Instagram and Snapchat are using AR to create their brilliant/horrid (delete as applicable) camera filters that add things like cat ears to a person who’s taking a selfie. It’s all very cool and exciting.
The only trouble is that holding up a smartphone or tablet to interact with AR is awkward. Not only does it make your arms tired if you’re doing something that takes a while to complete (like getting directions), but the experience isn’t exactly “immersive”; currently, it’s more of an “augmented camera” experience rather than “augmented reality”.
The next frontier for AR is a dedicated wearable AR device (probably glasses) that lets the user overlay digital content on top of what they’re seeing, as if it’s actually there in front of them, without having to hold something in their hands. Microsoft are already two generations into their HoloLens project, an AR headset that’s currently being used in some commercial applications. It sounds very promising, but my personal experience from having tried the device at Microsoft’s flagship store in London was that it’s still got a long way to go. Yet it’s a start, and Microsoft are not the only player here; Apple are rumoured to be working on their own AR device, which is expected to be released in the next few years.
If these devices can achieve the true promise of AR, which is to overlay digital content on our physical reality at the touch of a button, I think the potential for 360° VR content sharply increases. If we end up living in a world where most people wear AR headsets (which seems likely to me given that it’s a logical progression for blending technology into our lives), I can foresee a future where people will be dipping in and out of 360° VR content like they currently dip in and out of “normal” videos on their smartphones and computers. Imagine tapping a button and being surrounded by an immersive video from somewhere else in the world (or from somewhen else), as if you’re actually there. Imagine instantly being sat on a beach in a warm country, being at a concert, or watching a training video scenario unfold as if you were actually there. I think the potential is massive.
So, will all of this happen in the ‘20s? I think the answer is that it will, or it will at least be close. In a worst case scenario, I may be overestimating the capabilities of these devices by a generation or two – but I believe the end goal will be for AR devices to fully take over your reality with whatever you choose at the touch of a button. (With your consent, of course.)
Google’s attempt at releasing an AR-style device (Google Glass) to the public in 2013 failed miserably, but mostly because people weren’t comfortable with the built-in camera that pointed outward from the device. (It also looked silly.) The industry learnt from this; I think there’s a reason why smartwatches and fitness trackers don’t have built-in cameras. If a company like Apple, who’ve recently blown the world away with AirPods, can release a dedicated AR device that values privacy, looks good and provides enough value that people will use them on a daily basis, I think AR and, as a byproduct, VR content will be huge.
360° cameras show the future of video production, but will be let down by resolution
I know that this is another prediction related to 360° VR content but, honestly, I think it’s some of the most exciting technology in the world of video production right now. Despite my misgivings about how people watch 360° VR content presently, I think the current crop of 360° cameras and workflows are incredible, and show massive potential for changing how videos will be filmed in the future.
Traditionally, 360° cameras are used for filming 360° VR content. The filmmaker will put the camera down somewhere, let the camera film, then publish the video in a way so that the viewer can choose which bit to look at later on, giving them a 360° panoramic video that they can watch. This is fine, but the 360° VR video format isn’t really taking off yet, as I explained earlier.
Where I see the potential is for 360° cameras to be used in standard video production workflows. Instead of letting the viewer choose which part of the video to focus on, the editor can make this decision instead, allowing the entire video to be crafted in post-production rather than on location. The end result would be a “normal” video created using 360° footage – and the viewer would be none the wiser as they watch it on their phone, computer or TV.
So how would this work in reality?
Let’s say we were filming an interview between two people, with one person asking the questions and another answering them. Traditionally this would require two cameras, one focussing on each person. You may also want a third camera for showing the two people together in a “wide” shot. The editor would then switch between the three cameras to put the video together. This is how it’s done now, and you probably watch content like this every day online or on TV.
Where things get interesting with a 360° camera is that you could theoretically do exactly the same thing with just one camera. You could put the camera between the two people and it would capture what you’d traditionally need three cameras for – it would be able to capture the person asking the questions, the person answering the questions and it would let you zoom out in the edit to see a wide shot of them both too – all with one piece of footage.
The immediate benefit is that this reduces the production cost. One camera is far cheaper than three cameras, and only requires one operator. The bigger-picture benefit is that the editor would have complete freedom over who is in shot, how the shot is framed and they could even add movement if they’d like, such as pans and tilts, which would normally be “baked in” to traditional footage. It gives the editor the freedom that an animator has. Badly framed shots would be a thing of the past. Everything can be fixed in post.
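For the technically curious, here’s roughly how that post-production reframing could work: treat the 360° recording as a flat, stretched-out panorama, then work out which of its pixels a virtual camera pointing in a chosen direction would see. Below is a minimal Python sketch of the idea – the function name and its parameters are our own illustration, not taken from any real editing package:

```python
import numpy as np

def reframe_equirectangular(frame, yaw_deg, pitch_deg, fov_deg, out_w, out_h):
    """Extract a flat 'virtual camera' view from an equirectangular 360° frame.

    frame: H x W x 3 array covering 360° horizontally and 180° vertically.
    yaw/pitch: where the virtual camera points; fov: its horizontal field of view.
    """
    h, w = frame.shape[:2]
    # Focal length (in pixels) of the virtual camera's image plane.
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)
    x = np.arange(out_w) - (out_w - 1) / 2
    y = np.arange(out_h) - (out_h - 1) / 2
    xv, yv = np.meshgrid(x, y)
    # One ray per output pixel (z forward, x right, y down), normalised.
    dirs = np.stack([xv, yv, np.full_like(xv, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the rays by pitch (around x) then yaw (around y).
    p, q = np.radians(pitch_deg), np.radians(yaw_deg)
    rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    ry = np.array([[np.cos(q), 0, np.sin(q)], [0, 1, 0], [-np.sin(q), 0, np.cos(q)]])
    dirs = dirs @ (ry @ rx).T
    # Convert each ray to longitude/latitude, then to panorama pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])   # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))  # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return frame[v, u]  # nearest-neighbour sample
```

A real editing tool would interpolate between pixels and render this on the GPU in real time; nearest-neighbour sampling just keeps the sketch short. The key point is that yaw, pitch and field of view become edit-time decisions, not shoot-time ones.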
As it stands, this workflow is absolutely possible right now, so it’s not really a prediction for the rest of the decade – the reason we don’t do it is because the quality isn’t good enough. Our 360° camera films 8K video (twice the width, and four times the pixels, of 4K), but this is the resolution of the entire 360° image when laid out “flat”. To watch a 360° video, the image has to be wrapped around a virtual sphere that can be panned around, otherwise it looks like a big, squishy mess, and not like something you’d want to watch.
When the 360° video is being watched properly, the viewer will only see a small portion of it, which is effectively a zoomed in portion of the bigger image. This reduces the quality a lot, meaning that an 8K 360° video will look more like a 720p video on the small area that the viewer is actually seeing. This isn’t ideal when the majority of content being published online is now 4K; it looks a bit low quality by comparison. This is acceptable if you’re watching the video in 360° format as this is one of the trade-offs for the format, but it’s less acceptable if the footage is published as a “normal” video.
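The maths behind that drop in quality is straightforward: if the viewer’s window covers about 90° of the 360° panorama, they only ever see about a quarter of the frame’s width at once. A quick back-of-the-envelope sketch (the 90° field of view is a round-number assumption on our part):

```python
# Rough effective resolution of the visible window in a 360° video.
# Assumes an 8K equirectangular master (7,680 pixels across 360°) and a
# 90° horizontal field of view for the viewer.
master_width_px = 7680
full_circle_deg = 360
viewer_fov_deg = 90

# Fraction of the horizontal panorama the viewer actually sees at once.
visible_px = master_width_px * viewer_fov_deg / full_circle_deg
print(visible_px)  # 1920.0 - roughly HD-wide, before any losses
```

Stitching, projection distortion and interpolation eat into those 1,920 pixels further, which is why the result can look closer to 720p in practice.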
My prediction is that, in time, this might not be an issue. Video resolution will continue to get higher as computers become more efficient at handling the extra data and as camera sensors improve. But, by the nature of the beast, 360° cameras will always be a step or two behind their traditional counterparts. Whether or not 360° cameras take off in this way depends on whether consumer demand for higher resolution content ever peaks – which it may do at 4K, 6K, 8K or higher, but may not if technology companies keep pushing higher resolutions on consumers as a selling point for TVs, games consoles and digital cameras.
I’m in two minds as to whether we’ll reach that peak this decade, but I don’t see anything above 4K being worth the extra money, bandwidth and resources needed to create and view it – then again, I felt the same way about 4K when I was still filming in HD, and now I sneer at HD for being the inferior resolution that it is. (Except at home, where I still have an HD TV and don’t feel deprived in the slightest, even though I look at beautiful 4K footage all day, every day.)
But even if the resolution issues can be overcome, there’s still another problem with using 360° cameras in this way.
360° cameras have very wide lenses, which means they have a very deep “depth of field”. I won’t go into the details of what this means, but the current “look” for modern video is for the subject to be very sharp and for the background to be very blurry. With a traditional camera, using a zoom lens to get in closer to the subject helps add to this effect, and makes the footage look really good. (Zoom isn’t the only factor in creating the “blurry background” effect, but it’s the easiest to explain in a post like this.) A 360° camera can’t do this; it needs to be able to capture everything that’s around it, so zooming in will ruin the 360° effect.
In the future, and I believe within this decade, it seems reasonable to expect that 360° cameras will be able to add this “blurry background” effect in post-production, if required. Many smartphones with multiple cameras offer a “portrait” mode for photos, which adds an artificial “blurry background” effect to stills, and it’s inevitable that this feature will come to video recording on these devices as well. In fact, some phones are already starting to offer this, with varying degrees of success. They do this by capturing depth data about what’s in front of the camera, then applying artificial blur to the bits that are far away.
Higher-end 360° cameras like our Insta360 Pro are already collecting depth data to record stereoscopic 360° content (3D, in layman’s terms), and using that data to figure out what to blur and what to keep sharp wouldn’t be a huge stretch. Ultimately, it comes down to whether modern computers can keep up with the intensity of these tasks, which require a heck of a lot of power to pull off (creating realistic blur is really computationally intensive!), and whether there’s actually a demand for software that can do it. If it’s not commercially viable, it’ll never exist. But, in theory, it would be possible (even now, perhaps) to zoom into a 3D 360° video file and add blur to make it look a bit more like a shot from a traditional camera.
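As a rough illustration of the principle, depth-aware blur at its very simplest means blurring the whole frame, then keeping the original pixels wherever the depth map says something sits near the focus distance. Here’s a toy greyscale sketch in Python – the function names are our own, and real implementations use far more sophisticated, graduated blur:

```python
import numpy as np

def box_blur(img, radius):
    """Simple separable box blur for a 2D greyscale image (demo quality)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def portrait_blur(img, depth, focus_depth, threshold, radius=5):
    """Blend sharp and blurred copies of img using a per-pixel depth map.

    Pixels whose depth is within `threshold` of `focus_depth` stay sharp;
    everything else is replaced with the blurred version.
    """
    blurred = box_blur(img, radius)
    in_focus = np.abs(depth - focus_depth) <= threshold
    return np.where(in_focus, img, blurred)
```

A production version would also blur gradually as depth increases, handle colour channels, and deal with the messy edges where subject meets background – which is exactly where smartphone “portrait” modes still trip up today.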
So the potential is very much there, and the requirement for a flexible video capturing format that can be repositioned and repurposed after-the-fact has never been higher thanks to the wide variety of screens that people now watch content on. Imagine being able to seamlessly create content in landscape and portrait formats just by moving the footage around after it’s been recorded without losing anything from the shot or putting awkward letterboxing on the video. It would be brilliant!
The technology is already here, albeit in a fairly primitive form. It’s now a case of seeing if the tools catch up to let these workflows become part of the standard video production toolkit over the next decade. I’m tempted to say that they will.
The fundamentals won’t change
My final prediction for the next decade (before I spend the entire thing writing this post) is that, on a fundamental level, the world of video won’t change. People have been enjoying watching video content in the “normal” format since the late 1800s, and I don’t think any technological advancement will fundamentally change the fact that people like watching videos as they’re presented now.
3D video, 360° VR video and interactive video (think Netflix’s Black Mirror Bandersnatch) are, as it stands, gimmicks. In the same way that “Choose Your Own Adventure” books never replaced the traditional novel, and podcasts haven’t killed traditional radio, I don’t have any reason to believe that video as we know it today will be usurped by a newer and shinier video format.
The trend of creating videos in different aspect ratios for different devices will no doubt continue until we have the ability to create dynamic video content (probably with 360° cameras or a derivation of them), but even then I think there’s a huge value in the ability to control exactly what the viewer sees in your video. It’s part of what tells the story, and keeps the engagement of the viewer. I won’t bore you with the cinematography theory I learnt at university, but there’s a lot of power in being able to control how a shot is framed, and no filmmaker is going to give that up without a fight. (And, on a corporate level, no company wants to lose control of their message.)
As internet connections get faster thanks to improved fibre infrastructure, 5G, and probably 6G toward the end of the decade, and as wireless coverage becomes more ubiquitous, the demand for video content will only increase. I’ve banged the drum for the technology enough in this post, but I think internet speed is a major factor in unleashing the real value of 360° VR video, as you need the higher resolution to truly appreciate what the format can offer. Slow speeds mean low resolutions, which isn’t ideal for 360° VR video. (It’s somewhat of a digression, but I think there’s huge potential value in 360° video for surveillance and security monitoring, which will only be enabled by better infrastructure.)
On a slightly less positive note, I think video content will face a lot more scrutiny this decade as it becomes easier to manipulate. We’re already seeing a flood of convincing “deep fakes” appearing online that are misleading people and propagating lies, and I think the process of coming to terms with the authenticity of video content will shine a light on how shaky it’s always been. I think of the example where Disney were caught red-handed with their 1958 nature documentary White Wilderness, which falsely showed lemmings committing mass suicide, leading the public to believe that this was normal behaviour for the animal, which it isn’t at all. That was 62 years ago.
Ultimately, authenticity will come down to the credibility of the author of the content, in addition to the messaging it contains. I think a general rise in skepticism for all online content will encourage businesses to be more careful and diligent about the claims they’re making and, as video technology inevitably increases in quality (lowering the barrier to entry for high quality content), the ability to create a message that’s honest, true and effective will become more valuable. Which is where a company like Southpoint Films comes in, to help make sure that comes across through video, regardless of how it’s been produced.
So those are our predictions. Do you agree? Do you have your own? Send us an email and let us know. One thing is for certain, though – if the next ten years are anything like the last ten, they’re going to be incredibly exciting. We can’t wait.
If you’re interested in having high quality videos created for your business, please get in touch. We can help with creating videos that sell your products and services, that help you convey information to your customers and staff, and even more. We’d love to hear from you.