Wednesday, December 31, 2014

HFR & 120Hz's Soap Opera Effect: Why We Don't Need It (& How To Emulate It)

Oftentimes technology lurches forward and we get some pretty neat stuff out of it. But with the advent of 120Hz LCD televisions and HFR (High Frame Rate) movies, I see only a step backward. There are times when things need to change, but there are fundamentals that never will. I want to explain why I think these technologies are largely unnecessary, even if they can be useful for the occasional cinematic effect. Mostly, though, I will be describing why I think these are two ideas that are not worth your time.


What's In a Definition?
I first want to say that there are no true LED televisions. No TV is made of only LED technology; they are actually LCD TVs backlit by one of three types of LED technology. The only reason I bring up this detail is that I will be using "LCD TV" for the rest of the article.

Plasmas Rule!
Plasma TVs were (and still are) awesome. They are argued to have better color depth, but they also had a few downsides, chief among them power consumption, heat, and burned-in images. Yet they still carry a strong advantage over LCD TVs in terms of speed. If a show or movie has high frame rates or quick, intense scenes, a plasma has no problem showing that segment as it is meant to be seen. LCDs suffer from motion blur, which can obviously take away from the vision of the film and, more literally, from the audience.

The solution was to implement what is often referred to as the "soap opera effect" on 120Hz LCD televisions. The effect is more appropriately called "motion interpolation", "motion-compensated frame interpolation", or "motion smoothing". A new frame is generated from the current frame and the following frame, then placed between the two frames it was created from. This allows for a smoothing effect where blur becomes less obvious or non-existent. 60Hz LCD TVs are not capable of using this technology (thankfully!).

Move Over Standard Frame Rates!
Most movies are shot at 24fps (Frames Per Second). When movies were first created there was no standard frame rate, and it took some time before 24fps was settled on as the optimal choice. Since then it has been considered the de facto standard for shooting a movie. On television you will see shows and movies at 30fps in the US and 25fps in the UK. These are different, but not by enough to be highly noticeable.

HFR doubles the normal movie frame rate, using 48fps instead of 24fps. People who have seen any of the new Hobbit movies can comment on the look of HFR. It looks a lot like motion smoothing, and both Peter Jackson and James Cameron claim it is the future of cinema...

My Arguments
The main reason I don't like either of these two technologies is purely aesthetic. I could get used to them, but why should I when so many people agree that they are worthless? It's not like the battle between NTSC and PAL, where PAL had the better recording quality and was therefore the better choice. No, this is just a preference that some people think should be pushed onto everyone else because they believe it makes things better.

3D Fad
Before I start in on either motion smoothing or HFR, I want to give an example of what I see happening. 3D is a horrible fad. It has been around a long time and really hit popularity in the 1950's. It later hid itself in educational museums before coming back in recent years.

3D is fun. I remember watching a shark, jaw agape, jump out of the screen as it tried to attack! But that's all it is: fun. It gives no additional emphasis to a story, it does not help develop a character, it doesn't do anything that would make a movie better. Think of a movie like Piranha 3D (if you haven't seen it, don't), then think about that movie without 3D. The only reason that movie made money was because it was in 3D; otherwise, it was just a horrible movie.

3D is a gimmick, there is no other way to put it. It has limited uses, and I have yet to see someone use it to great artistic effect. Only when that happens will I be able to say there is a good purpose for 3D. (In case anyone is paying attention, the shark encounter I mentioned successfully evoked an emotional response, but it was not trying to achieve an artistic effect.)

My perspective on 3D is how I similarly view motion smoothing and HFR. Both have been around for a while in various forms, but only now are people trying to push them as some great feature to have. They do not add anything to a show or movie; if anything, they weaken it. In time, I can only hope they are seen like 3D: as an optional feature.

Motion Smoothing, Ugh!
The first time I encountered motion smoothing was when my grandparents got rid of their large projector TV for a slim Sony LCD TV. Immediately I was bothered by it. Why did it look so shoddy? Was something set up wrong? Was it just the channel, or was everything like this?

It took me a matter of weeks to get used to their TV, but even then it never looked good. The reason people refer to it as the soap opera effect is that it looks as cheap as the video the daytime soaps are shot on. A great show can look like junk when motion smoothing is enabled.

The sad thing here is that even though it won't cripple a story's content, motion smoothing will detract from it if you are not accustomed to it. It will give the appearance that the show is horrible because the look of the show is now horrible.

I would rather have a 60Hz LCD TV and suffer the blur problems than opt for a 120Hz LCD TV and deal with motion smoothing. I have never once noticed any issues with my 60Hz TVs during action sequences, quick cutting, or high frame rates. Motion smoothing also seems to make people look hyper-realistic: I find myself paying more attention to an actor's blemishes and wrinkles, which starts taking me out of the story and makes me lose interest altogether!

You're That Special Kind of HFR
HFR is no better. The way most of us see cinema is as an escape of some sort. There is a disconnect between the movie and reality. A feeling that it is just a movie, nothing more. It can be sad, enjoyable, confusing, exciting, but once you leave the theater reality sets back in. If this doesn't ring true for you, think about this: Why would the lights in a movie theater need to be dimmed or turned off when the movie would be almost as perfectly visible with the lights on?

HFR is said to be more realistic. Who said we need more realism? When 24p (24 progressive frames per second) first came to digital video camcorders, I was ecstatic. 30fps was fine, but 24p gave that cinematic quality we were all looking for. It's a perfect frame rate. It isn't choppy, it's smooth, and it gives just enough surrealism to know that it's fake but has its feet in reality.

There are movies where sequences are shot at 60fps to give a gritty, real-life feel. Many war movies (e.g. Saving Private Ryan) take advantage of this to put the viewer in the movie. A good scene I can recall is from Danny Boyle's 28 Days Later: the group brings the car into the tunnel, gets stuck, and the zombies begin running towards them at full speed. That scene is shot at 60fps. It is an artistic decision by the director to help people feel like they are there. The effect is meant to be emotional, heightening awareness and releasing adrenaline, the same process the characters in the movie are going through.

But HFR is not being used like this. It is just meant to seem more realistic, not to put you in the place of or alongside the actors, and not to get the audience to feel something you only feel in reality. Why does the standard need to change? What makes HFR the definitive pick even if it does need to change? And why do the few decide when the many are the ones who will live with it? These are the questions that need to be asked.

Standards vs. Features
There have been many changes for film and video, many that have been good. From black-and-white to color is a large one. It gave us the ability to see things closer to a realistic perspective since we see in color. But black-and-white didn't go by the wayside. While not often used, it is still implemented for dramatic and artistic effect.

Resolution increases helped realism too. Compare a DVD and a Blu-ray of the same movie and you can easily see how dramatic the jump is. Even just 480p to 720p is an astounding difference, and the jump from HD to 4K, 6K, or 8K is just as dramatic. Old and new resolutions are still used a lot in today's media, whether on the Internet or from your cable provider.

Fullscreen (pan-and-scan) used to be how all shows and movies were formatted for VHS or DVD. It was fine and did right by me. But widescreen started gaining acceptance because of how much more you could fit into the frame. It took some time to become a standard due to fullscreen advocates, but eventually it became the new preferred standard. I was one of those who eventually came around to the idea and later had no idea why I had thought it was an issue to begin with.

These are standards, common standards that rarely change, and when they do, it is usually agreed to by the majority to be for the better.

We've already talked about a few features, the obvious one being motion smoothing, and another being 3D. They are not set in stone as the only options available, nor should they be. They can add an element of fun or realism to your viewing, but they do not need to be present for you to enjoy it.

Audio can be argued to be both a standard and a feature. Many movies boast standard 5.1 surround sound, while some go up to 7.1. But TVs, for the most part, output plain stereo (2.0). So the standard is really stereo, while the features are varying degrees of surround sound, which possess their own set of standards. Will the surround sound standard continue to grow? I don't know; how many speakers do you really need to hear a movie perfectly? Stereo doesn't seem like it will change as the norm unless people start coming standard with three ears. Mono was the predecessor of stereo, but once stereo TV broadcasts were introduced in 1984, everyone followed along and it has remained the standard since.

What I'm trying to get at is that something new doesn't have to become the standard. It can be a feature, something people have a choice in selecting instead of being forced to have. It's a lot like the U2 mishap with iTunes: their album was free and automatically downloaded from iTunes into your music library. Numerous people were upset because they did not want U2's music forced onto them. I can't blame them; there's plenty of free music out there that I'm not downloading, or being forced to download, and I prefer it that way.

If you don't have motion smoothing on your TV, or you have yet to see a film in HFR, you can still emulate the effect on your computer to see what all the fuss is about.

Download SmoothVideo Project, a free program that will let you experience motion smoothing with any video file. Install the program (I opted to install the MPC player along with every extension the setup offered). Open the MPC player and select a video file, anything you have seen before and know how it should play. Even animation will show the effect, although to a lesser extent.
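
If you would rather not install anything and already have ffmpeg around, recent builds can fake the same effect from the command line with the minterpolate filter. Here is a minimal sketch in Python; the filenames are made up, and it assumes ffmpeg is on your PATH:

    import subprocess

    # minterpolate with mi_mode=mci performs motion-compensated frame interpolation,
    # i.e. a software version of what a 120Hz TV's motion smoothing does in hardware.
    subprocess.run([
        "ffmpeg", "-i", "input_24fps.mp4",         # hypothetical source file
        "-vf", "minterpolate=fps=60:mi_mode=mci",  # interpolate up to 60fps
        "-c:a", "copy",                            # leave the audio untouched
        "smoothed_60fps.mp4",
    ], check=True)

Play the result next to the original and the soap opera effect should be immediately obvious.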

Money, Money, Money
While I don't think motion smoothing was really conceived as a way to get more money out of consumers, it can be argued that it was created to get people to buy more expensive televisions that are cheaper to produce. HFR, on the other hand, is definitely a way for the movie industry to rake in more money from consumers.

Let's say that HFR does become the new standard for how movies are shot and viewed. How would that affect consumers? Just think about the standards that have come to pass: VHS/Betamax begat LaserDisc, which begat DVD, which begat Blu-ray. These were logical jumps, as each was an increase in quality, save LaserDisc - an increase, but never a true standard because it failed to gain popularity worldwide despite its superior quality. I want to make it clear that HFR is not an increase in quality, because it's about frame rate, not image quality.

However, if it did become a standard, the first thing to happen would be reprints of movies in the HFR format. There has been mention that certain software has problems with the 48fps of HFR, so this may mean not only having to get software that works with HFR, but also having to repurchase Blu-ray players that implement HFR capabilities. And if this standard arrived on the cusp of the next, better, digital media format battle, you would have to ditch all your Blu-ray items and re-buy everything in the newest medium.

Many of us have already experienced this type of situation with the jump from DVD to Blu-ray, echoing the pattern of history from VHS to DVD. I would hate to have to not only do it for HFR, but again when the newest medium cements itself as the new standard...

If I Had One Wish
Do yourself a favor and skip movies in HFR, and don't buy a 120Hz TV with some proprietary motion smoothing technology. As I said before, you can get used to the awful appearance, but why bother spending money on things you'll only hate? The only way we can make these technologies features instead of standards is to stop supporting them. If we stop buying into them they'll either go away or, more likely, just stay as features. When you decide a standard needs to change, that is far different from somebody deciding for you that a standard needs to be changed.

One last thing, I feel like much of this is akin to a specific South Park episode. The one where Cartman defecates via his mouth. Then everybody starts eating through their butt and defecating with their mouth, despite there being no benefit to doing so. As disgusting as that sounds and is, the scenario provided a valid point. Why do something just because you can? If it provides no benefit then why change?

Sunday, December 21, 2014

7 FREE Video Post-Production Programs You Can't Live Without!

This is a followup to my last article on getting Adobe CS2 completely free. There are plenty of free programs out there for video production. Some are helpful, some are not, and some come with the unwanted gift of adware. Today, I will provide you with some incredibly valuable tools that should help you finish creating any video project.

Why You Need Them
Like I stated above, there are plenty of free tools on the web. However, some come with risks, and others just aren't up to snuff. If you want to make a professional video on a budget, you will find many of the free tools presented here extremely useful. Of course you can go all out and buy the newest and best, but why do so if you don't have to?

MediaInfo
I have briefly written about this before, as I have with a few of the others, but I find this tool indispensable. MediaInfo integrates into your context (right-click) menu. When you right-click on a video file, it gives you the option to check the file's information with MediaInfo. Select it and you'll see everything from codec to bitrates, even the audio details.

Why use this? Maybe you have to take a file and reconvert it to another format. MediaInfo would be a quick way to check bitrates and audio information so that you can try to attain the same quality without giving it fluff (extra size for nothing).
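
If you ever want to script that same check instead of right-clicking, the third-party pymediainfo package wraps the same MediaInfo library. A rough sketch, assuming pymediainfo and the MediaInfo library are installed (the filename is hypothetical):

    from pymediainfo import MediaInfo

    # Print the codec, bitrate, and resolution details MediaInfo would show in the GUI.
    media_info = MediaInfo.parse("source_clip.mp4")
    for track in media_info.tracks:
        if track.track_type == "Video":
            print("Video codec:", track.format)
            print("Bitrate:", track.bit_rate)
            print("Resolution:", track.width, "x", track.height)
        elif track.track_type == "Audio":
            print("Audio codec:", track.format, "| bitrate:", track.bit_rate)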

Download MediaInfo here.

Subtitle Workshop
This is purely for those who have to do subtitle work. Subtitle Workshop makes it easy to sync up and add subtitles to any video. It has numerous options that make it easy to do what you want, and you can export to many formats, including the standard ".srt" file. (Which is designated as "SubRip" if you do end up using Subtitle Workshop.)

There are many reasons to use this: a presentation for a foreign audience, adding notes throughout a video, or even correcting the subtitles for a movie. Let's say you own a movie that you want to show to some relatives from overseas. They don't understand English, and there are no subtitles in their native language. You hunt down some subtitles in their native tongue only to find that they are off by several seconds when played. Subtitle Workshop can then be used to sync the subtitles to the actual movie. On a side note, this can be easier said than done if the movie's frame rate differs from the one the subtitles were made for...
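
For the simple "off by a few seconds" case, you don't even need a full tool; a few lines of Python can nudge every timestamp in an .srt file by a fixed offset. This is only a sketch - the filenames and the 2.5-second delay are made up, and it handles a constant offset, not a frame-rate mismatch:

    import re

    OFFSET_MS = 2500  # positive = show subtitles later, negative = earlier

    def shift(match):
        h, m, s, ms = (int(x) for x in match.groups())
        total = max(0, (h * 3600 + m * 60 + s) * 1000 + ms + OFFSET_MS)
        h, rest = divmod(total, 3600000)
        m, rest = divmod(rest, 60000)
        s, ms = divmod(rest, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    timestamp = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")
    with open("movie.srt", encoding="utf-8") as f:
        text = f.read()
    with open("movie.shifted.srt", "w", encoding="utf-8") as f:
        f.write(timestamp.sub(shift, text))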

Download Subtitle Workshop here.

Format Factory
This is another program I have already written about, and in some detail. I originally started using it as a replacement for Super. Super - unless they have changed - now bundles adware with its installer. Format Factory does not.

Format Factory is a great converter with a great many options. Not only can you convert one video file to just about any other codec/container, it has some other useful features: you can change endpoints, mux video and audio together, and even hard-code subtitles into a video (see the program above). It can even reformat images. As with any good converter, it offers automatic presets for different qualities and devices. Format Factory also includes the option to use "multi-threads", meaning multiple cores of your CPU (which I do advise when possible).

The only bad thing I can say about Format Factory is that its GUI used to seem cluttered, and the newer versions seem far too spacious. In either case, it works great, so don't let that put you off.

Download Format Factory here.

Hybrid
Again, this is another program I have discussed at length, but only for certain purposes. This is another converter. The major differences between this and Format Factory are its GUI, ease of use, and its ability to convert to x265 or VP9. That ability is really my only reason to use it, and I really only use VP9 because YouTube already supports it.

Everything else is similar to Format Factory in that you can convert to just about anything. It cannot edit audio and video quite like Format Factory can, but that's not a converter's job. The GUI is fairly bare-bones, so you need to input your own values for everything. It also has multi-threading support, which is extremely useful when trying to convert to VP9 with Opus.

To give a good example of why anyone might need this: I live in a country where Internet speeds are notoriously slow. Uploading a high-quality video to YouTube can take up to 18 hours for just 3 minutes of footage! With the VP9 and Opus codecs I can drastically reduce the size of the video while maintaining its quality, and the upload then takes only an hour or two.
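
Hybrid drives its encoders through the GUI, so there is no single command that reproduces its exact output, but for a rough sense of the same VP9 + Opus combination, here is what an equivalent ffmpeg call might look like (filenames and quality values are only illustrative):

    import subprocess

    # Constant-quality VP9 (-crf with -b:v 0) plus Opus audio in a WebM container.
    subprocess.run([
        "ffmpeg", "-i", "music_video_master.mov",
        "-c:v", "libvpx-vp9", "-crf", "30", "-b:v", "0",
        "-row-mt", "1",                    # lets libvpx-vp9 spread work across cores
        "-c:a", "libopus", "-b:a", "128k",
        "music_video_vp9.webm",
    ], check=True)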

While Hybrid may not have the glam of Format Factory, it becomes very useful for working with newer codecs. Check out my article about the music video I did with it. I even used it to convert the same music video to VP9 and Opus in 4K [this is not in the article, but it can be accessed from the same YouTube channel]. One further downside is that in order to use multi-threading support for VP9 you must reduce the quality setting from "Best" to "Good". I personally found no difference in the quality, but others might. On the flip side, converting with "Best" takes hours upon hours to complete...

Download Hybrid here.

GoPro Studio
The free edition of GoPro Studio is bountiful. It provides you with editing and color correction tools. While some may want it for those alone, I have never used that side of the studio, as I prefer Adobe Premiere Pro. But what GoPro Studio has that no other editing suite does is the ability to convert footage from 8-bit to 10-bit AVI files.

Originally, this software was part of CineForm's NeoScene. But GoPro bought out CineForm and folded it into their studio. It used to cost $200, but now it's free! I will explain what it does from a NeoScene standpoint, as I still use that for converting files. 8-bit video is captured with 256 levels per color channel; 10-bit video uses 1024 levels per channel, four times the precision of 8-bit. Converting 8-bit to 10-bit will not increase your bitrate, but it does give you a larger color palette to work in, which is great when dealing with color grading. This helps ensure that you can get the best color correction possible out of any 8-bit footage.
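
To put numbers on that: 8-bit is 2^8 = 256 levels per channel, or roughly 16.7 million possible colors (256 x 256 x 256), while 10-bit is 2^10 = 1024 levels per channel, or roughly 1.07 billion possible colors.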

GoPro Studio's standard version is free, so there should really be no downside, but I can think of at least one... When GoPro discontinued NeoScene and rolled it into GoPro Studio, they did not carry over the ability to convert AVCHD footage. While many may not have a problem with that, people with cameras like the hackable GH2 (myself included) will not be able to use it. This is also why I continue to use NeoScene instead. You can read more about that here.

Download GoPro Studio here.

DaVinci Resolve Lite
There are many color correcting programs out there: SpeedGrade, Synthetic Aperture's Color Finesse, Scratch, Smoke, and even the discontinued Color still packs a punch! But none of those are free like DaVinci Resolve Lite. The free version of DaVinci Resolve has only a few major differences from its paid counterpart.

The Lite version has no support for anything relating to noise reduction, 3D, or networking/database features. It also cannot export 4K video (though it can play it back) and has no support for "power mastering". There are a few other differences, but nothing you're likely going to need to know unless you run a post-production facility. For a full list, go to this link.

I recall my first real sighting of DaVinci Resolve in use. I was in Thailand and got to meet the head of the largest production company in the country. The company was quite interesting as they incorporated every part of pre through post production possible, not something you see in Hollywood. One of the last rooms I peered into was a colorist working in a theater, by himself, on a film project using a huge control surface. Quite amazing to see someone working on a 2K screen of that size.

I believe the people behind DaVinci Resolve realized that, while they had an outstanding product, they needed to reach a wider audience or risk becoming a purely niche product, shunned by future generations with no familiarity with DaVinci Resolve. In a surprising decision, they started releasing free versions to the public. Even more surprising is how capable the free version is compared to the paid version.

This is a spectacular program to use. If you are a beginner - meaning no post-production experience at all - there will be a steep learning curve. If you are an editor trying to color correct, there will still be quite a learning curve, but not to the extent of someone who has never used an NLE. My best advice is to first learn how to edit fluently in at least one program. Once you have a good grasp of your style and the program, begin using the color grading tools available in that program.

I started color correcting in Sony Vegas. After some time I understood how things did and could work, which allowed me to move onto more complicated yet more powerful software. While I can't say it is easy to jump from one color grading program to another, I have done it numerous times and needed very little time to get up-and-running. Getting to "know" a program is a different feat altogether, but having the basics down of the skills needed is what is important.

This is a professional, free, color grading program that can make a huge difference for a video project. I find it complex, but gratifying. The only issue you may run into is how to use it alongside your video editing software. There are a few main options, but you should research this on your own and determine what workflow works best for you.

Download DaVinci Resolve Lite here.

Fin
I am sure there are plenty of other useful tools out there, but I find myself coming back to these time and time again. I enjoy free products, but I especially enjoy the ones that surpass my expectations and establish themselves as my favorites over paid software with similar capabilities. The tools here, and those in my last article, should give you enough for an entire professional post-production setup.

Thursday, December 18, 2014

FREE Adobe CS (Creative Suite): Mac & Windows!

This article started out as just a small part of my next article. I wanted to share some free programs that are useful for video amateurs and professionals alike, but I realized that getting Adobe Creative Suite (CS) for free would be so breathtaking it would take away from the other programs on the list. Instead, I will just discuss Adobe CS here and put the other free video-related software in its own article.

WARNING: This is not actually freeware. Adobe has made it abundantly clear that this is only for CS2 users who legitimately owned the product(s). If you download and use anything, it is at your own risk...

Adobe Creative Suite
This is the mother lode, what you clicked for... Adobe Creative Suite, completely FREE!

More than likely, if you found this article you couldn't care less about anything else I have to say on the subject, but I do want to give a bit of background. You might be thinking that I am somehow trying to scam people, or trick them with rebates and other discounts. No, this is not a trick. This is real.

The one minor thing I neglected to mention is that this is for Adobe Creative Suite 2, so I am sorry if you came looking for any newer versions. On the plus side, this suite is still remarkably capable and comes straight from Adobe. Adobe has discontinued support and the activation server for CS2 products, given their age. They stipulate that the programs do not run on "many" newer operating systems, but that simply isn't true.

You will need an Adobe ID if you do not already have one. The link near the bottom will take you to a login page; once logged in, you will be able to download Adobe CS2. There are several languages available, but only English, French, German, and Japanese have everything available. You can download the entire suite or just the programs you need. There were two versions of CS2, Standard and Premium, and the site will let you download all the Premium additions as well. Each program comes with a legitimate serial number, and for a given platform (Mac or Windows) the serial number is the same across all languages.

One last piece of information: not every product is available; Version Cue, for example, is missing. That is not a huge loss considering it was just a way to collaborate with designers over the Adobe network. ImageReady might be included with Photoshop.

The Bad
Adobe CS2 is a much older suite, and its successors have made a lot of improvements. That being said, I'll list a few drawbacks:

There are certain tools and features that are not present in this version compared to newer ones. This should hardly be a reason to pass up a free offer, though. Many techniques can still be performed in CS2 with a bit of extra effort. I'm thinking specifically of Photoshop, where there are so many tutorials on how to do things in both older and newer versions. Another example: if you edit video, there will obviously be fewer codecs and containers to choose from. There may be third-party codecs and containers you can install, but worst comes to worst, you can always export an uncompressed AVI and then use free converter software to change the video into the codec and container you need.

There are also programs missing that you can only get in later versions, like SpeedGrade, Encore, or Flash. Flash I wouldn't see as a big deal, as there are plenty of other tools to learn with, arguably better ones at this point. SpeedGrade is just a color grading program, and I found DaVinci Resolve Lite just as powerful and more to my liking. Something like Encore, while quite useful, can be replaced with other (free) alternatives; besides, that era's version only supported DVD, and with Blu-ray here to stay, it would eventually be phased out anyway. However, programs like Fireworks I would miss, because it let me easily and quickly optimize web images. And if you're an Edge user, it's too bad. There is also no Prelude, but even CC doesn't have that.

One other mention: while the suite works on newer systems, there are known issues. Adobe CS2 consists entirely of 32-bit programs, and some people have complained about Photoshop not working on 64-bit Windows 7. However, solutions like these should fix the problem and allow Photoshop to run smoothly. There could be other problems, but I think this is the most evident one.

The Good
Now I'll cram in a few good things about this, beyond being free:

Audition is available! Many people hated the switch to Soundbooth in later Creative Suites, so many kept using older versions of Audition for their features. Soundbooth was meant for a crowd where audio was not their forte, but the uproar it caused in the community led to it finally going by the wayside in 2011. Regardless, Audition is still a great program, and I have even read that some people continue to use this version because of how good it is.

It has GoLive and Acrobat 3D, which are not in any newer versions. GoLive replaced PageMill as Adobe's primary HTML editor, and was itself later discontinued in favor of Dreamweaver. I actually started with PageMill, so I could probably find some use for GoLive. Acrobat 3D can read most CAD formats so you can bring that geometry into a PDF, which could definitely be useful for people building projects with CAD programs. Newer Acrobat versions have that functionality built in, but of course at a cost (unlike the free Adobe Reader).

The file formats are still readable by newer versions. Of course there are some newer formats, but newer versions are backwards compatible. So even if you have to send something off in an "old" format, it will still be readable in newer versions. The opposite is usually not true...

Not a huge point for CS2, but there are older plugins and extensions that no longer work in newer versions yet will work here. There are a lot of them for Dreamweaver, Premiere Pro, and even Photoshop. Some newer items should still work in CS2 as well, such as brushes or font families.

The Download
Download Adobe CS 2 here.

Why Adobe? Why?
Why did Adobe really make this available? The official response I have read is that they wanted CS2 users to be able to keep using their legitimate products; with the activation server gone, the serials would otherwise be checked online and considered invalid. This was Adobe's way of making sure that CS2 users would not run into trouble once the server was taken down.

Unofficially, it seems like it could be more than that. Seven years is a long life for a product, especially one that gets updated almost yearly. My only thought is that Adobe wants the upper hand over products of similar use, like Final Cut Studio or Vegas Movie Studio. The software is old, but still useful. And since their focus remains on CC, what's the harm in letting it go for free? If anything, it could drive more people to Adobe's suite: people just getting into the game who end up using and liking CS2 may decide to upgrade to CC. Not a bad strategy, and it makes Adobe look better in either case.

EOL
Just because a product has had its run doesn't mean it is worthless. I would think CS2 would be great for users fresh out of college who had CS6 or CC available to them at school but can't afford to buy them now. In general, it is good for anyone who wants to do legitimate work and plans to upgrade in the future.

Of course, this is only meant for licensed users, so again be forewarned there could be consequences whether for business or personal use.

Wednesday, December 17, 2014

GTX 970/980, More CUDA Cores Than Ever Before! (Proof)

When I first read about the GTX 970 and GTX 980, I was amazed at their performance given their prices. It was also the first time I was more than willing to get a second-tier card (the GTX 970). But then I saw the CUDA core counts and my mind began to drift back to the GTX 780. I then realized that the 900 series actually offers more CUDA cores than what is being listed...

CUDA Cores
CUDA cores are useful for certain applications. Many programs take advantage of them to accelerate performance. Here is a list of some on NVIDIA's official site, and here is more on NVIDIA's site geared towards engineers. I highly doubt most gamers consider this important, if they find a use for it at all (other than bragging rights). But for people like myself, who use products like Adobe Premiere Pro, CUDA cores can make all the difference.

Breakdown
A breakdown is required to understand why the GTX 970 & 980 would actually be a good upgrade from a purely CUDA-core standpoint. First, I will list some cards and their CUDA core counts:

GTX 480
CUDA Cores = 480

GTX 580
CUDA Cores = 512

GTX 680/GTX 770
CUDA Cores = 1536

GTX 780
CUDA Cores = 2304

GTX 780 Ti
CUDA Cores = 2880

GTX 970
CUDA Cores = 1664

GTX 980
CUDA Cores = 2048

Note: The GTX 770 is a GTX 680 in wolf's clothing - the same GPU, rebadged.

Now, at first glance, it would seem there are no large leaps that would prompt a need to buy a GTX 970, depending on what you have now. In fact, some leaps would be in the opposite direction!

I initially had a GTX 480 and didn't upgrade until I could get a GTX 680, a difference of 1056 CUDA cores. But for me to jump to a GTX 970 would mean only 128 more CUDA cores... Definitely not the jump I had from the GTX 480 to the GTX 680, and certainly not what I expected from a GTX 970. At best, the GTX 980 would be better, albeit at a much larger cost.

Calculations
But wait, the numbers are lying to us. NVIDIA claims that the Maxwell architecture's CUDA cores in the GTX 970 & 980 are 40% more efficient! So we need to recalculate the core counts of every older card, leaving the 900 series alone, to get values we can actually compare.

We leave the 900 series alone, as those cores are the 100% baseline. Because the new cores are 40% more efficient, we subtract those two numbers to arrive at 60% and multiply every other series' count by 60%. In other words, each CUDA core in any older series is treated as 0.6 of its listed value.
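
Here is that conversion as a few lines of Python, using the 0.6 factor described above (the listed core counts come from the first table):

    MAXWELL_FACTOR = 0.6  # one pre-Maxwell CUDA core counted as 0.6 of a Maxwell core

    pre_maxwell = {
        "GTX 480": 480,
        "GTX 580": 512,
        "GTX 680/770": 1536,
        "GTX 780": 2304,
        "GTX 780 Ti": 2880,
    }

    for card, cores in pre_maxwell.items():
        print(f"{card}: {cores} listed -> {cores * MAXWELL_FACTOR:.2f} Maxwell-equivalent cores")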

If we do the math and multiply all the CUDA core amounts by 60% we get:

GTX 480
CUDA Cores = 288

GTX 580
CUDA Cores = 307.20

GTX 680/GTX 770
CUDA Cores = 921.60

GTX 780
CUDA Cores = 1382.40

GTX 780 Ti
CUDA Cores = 1728

GTX 970
CUDA Cores = 1664

GTX 980
CUDA Cores = 2048

The calculations make it apparent that the 900 series truly is a leap in CUDA cores if you are coming from a GTX 770 or below. If I replace my GTX 680 with a GTX 970, I gain an additional 742.40 effective CUDA cores - about 70% of the jump I got last time. That is a good increase, especially coming from the second-best 900 series card (as of now). Trading up to a GTX 980 would net me 1126.40 CUDA cores, a bit more than my original jump!

GTX 780 & 780 Ti: The GTX 780 is not quite up to a GTX 970, but it is only 281.60 effective CUDA cores behind. I am not sure I would consider that a big enough increase to take the plunge; it's really a toss-up. If I were contemplating the GTX 980, then I would say it is worth it. As for the GTX 780 Ti, it should be better than a GTX 970 but not as good as a GTX 980. It obviously would not be worth the downgrade to a GTX 970, and in all honesty, I don't think the GTX 980 would be a smart upgrade either.

By Far
This is the shortest article I have written to date. At first it was meant to be part of a post about why to get a GTX 970, but secretly all I wanted to do was write about this CUDA core secret. There are very good reasons to buy a GTX 970, but I'll probably wait until I get one before I finish the original article.

Hybrid Physx, HyperSLI, Different SLI, Hybrid SLI/Crossfire, & SLI with Crossfire!

I was trying to think of what to write next as I was perusing deals on graphics cards. Then it hit me: why not discuss some interesting tricks that can be done with SLI and Crossfire! If you don't know what those are, think of them as RAID for graphics cards. And if you don't know what that is, then check out some of my other articles, or just read on...




WARNING: The techniques below do involve risk, I am not responsible for any mishaps.

SLI/Crossfire
ATI (now AMD) and NVIDIA have always been the big contenders in the graphics card realm. Generally speaking, NVIDIA brings the most raw power, while AMD brings better performance for the price. The battle between the two rages on as consumers fight amongst themselves over who is better and why!

Along the way, technology came about that lets you connect two (or more) graphics cards together. If you connect at least two AMD/ATI graphics cards, it's called Crossfire. If you connect two NVIDIA graphics cards, it's called SLI (Scalable Link Interface).

Crossfire is now officially CrossfireX. There is some confusion with this name, as CrossfireX was normally mentioned in relation to mobile platforms prior to the official change, and at times was also confusingly defined by some as an array of four or more graphics cards. As such, I continue to use Crossfire for desktop arrays of two or more graphics cards.

SLI and Crossfire offer better performance, mainly for games. They can offer better performance in applications too, if those applications take advantage of two GPUs at one time, which seems rare. One example is Adobe Premiere Pro, which can use SLI when rendering out video. Traditionally, graphics cards needed to be connected by physical "bridges" and had connectors for them. As of now, only AMD graphics cards can be linked without a physical bridge.

Hybrid Physx
This is probably the least useful trick of the five I will be discussing - not because it is boring or difficult to accomplish, but because there is little use for it. Physx is proprietary to NVIDIA and is used in some games to improve visual effects. (There are also applications that use Physx, but I believe that's even less common than building games with it.) If you've played any of the newer Batman games on the PC, one difference with Physx enabled on an NVIDIA card is the smoothness and wrinkling of Batman's cape as he moves about. It is a nice complement to the details, but I have never found myself using Physx despite my favoritism for NVIDIA cards.

Anyway, Hybrid Physx is a way to pair an AMD card with an NVIDIA card and increase performance in Physx games and applications. On older NVIDIA and AMD cards, this seems to work quite well. There are mixed results for newer cards, but it can be done. I even read about someone getting a GTX 680 and an R9 290 to work together - which, oddly enough, is exactly the pair I have.

The titles will have the links for the full instructions, but I will give the essential requirements in order to get them working:

Older NVIDIA Graphic Cards

  1. First install NVIDIA drivers ranging from version 258 to 285 for the GeForce 256.
  2. Then download and upgrade the Physx SS driver up to version 9.11.0621.
  3. Now download and install the Hybrid Physx Mod 1.05ff. This should automatically patch the necessary files to get Hybrid Physx to work.
  4. Optionally, you can use the included command-line tool to set a desired configuration (by which I assume they mean the display).

Newer NVIDIA Graphic Cards

  1. First download and install PreHybrid for older games.
  2. Then download and install either NVIDIA 320.49 drivers for GTX 500 series and below, or NVIDIA 314.22 for GTX 600 & 700 series. These drivers do not include Physx, which is why they are specifically used.
  3. Download and install the latest AMD Catalyst drivers.
  4. Download and install Physx driver version 9.13.0725.
  5. Download and run Hybridiz.
  6. Delete numerous files in the game directories.

Note: The Hybrid Physx for newer NVIDIA cards will only work on Windows 7/8, and can only work on games that use Physx 2.0 (not 3.0).

For newer cards there seems to be a lot more work involved in getting everything set up. There are multiple files to download and delete, and if you upgrade your NVIDIA drivers (Physx will not upgrade if already present on the system), you will need to rerun Hybridiz. There is also the chance of crashes if the steps are not done in order.

Again, I find Hybrid Physx rather useless for so much work. Any game with Physx will run happily without it. I have never been so saddened by the graphics of a Physx-enabled game that I thought to myself, "I need Hybrid Physx." If anything, I would more likely just buy a second identical NVIDIA card to SLI with, and increase Physx performance that way. But it is a cool experiment to try if you're bored and have some time.

HyperSLI
HyperSLI enables SLI on motherboards that do not officially support it. Originally, this innovation was brought to the public by ASRock, which had created a patch for its motherboards with a certain chipset. The team at NGO took the patch and modified it to work on any motherboard.

I personally performed this method on an ASUS Crosshair III motherboard. I had bought two NVIDIA 560 Ti's not realizing that the motherboard only supported Crossfire. So I went on a hunt to see if I could enable SLI, and luckily, I could!

The newer versions of this are done by a different team, who call it HyperSLI. It is just as easy (if not easier) to use, and should be just as successful. The process is quite simple and will take very little time to complete:

  1. Ensure that your CPU supports virtualization technology.
  2. Ensure that your BIOS has virtualization enabled.
  3. Download HyperSLI 1.0.
  4. Open HyperSLI.
  5. Push the button to enable SLI.
  6. Reboot PC.

NVIDIA requires at least two x8 PCI Express slots to officially offer SLI support. Motherboard manufacturers can often save money by offering one slot as a full x16 and another at x4. So, there are still plenty of motherboards out there with just Crossfire support, and if you happen to have one, this will almost certainly work!

Different SLI
Different SLI was actually created by the same team behind HyperSLI. ATI always had the added benefit of being able to Crossfire across different models of graphics cards (not all of them, but those sharing the same architecture). NVIDIA never embraced the concept and forced consumers to buy matching models [meaning the same model, with the "choice" of a different manufacturer].

Different SLI has come along to change all that. It allows you to take two different models of NVIDIA cards and put them into SLI. Initially, the theory was that only NVIDIA cards using identical GPUs could successfully perform Different SLI. Some tests supported this, while others did not. At this point, it has been determined that the hardware IDs have to match - or rather, three specific characters of them do.

This really opens the doors for many NVIDIA users. There are numerous cards that can be SLI'd. If you have a cheap card, you may be able to upgrade with a better card and get SLI. If you have a new card, you may be able to get an older, cheaper one and get SLI. In any case, there is a good chance you can get that additional boost in performance that SLI delivers.

Here are the steps you need to know, including preparation, in order to perform Different SLI:

  1. Check the first post here for your card and what other cards match it for SLI purposes. Look at your graphics card's first three characters/digits after "DEV_" (see the sketch after the note below for one way to pull this ID).
  2. Compare these to the graphics card you wish to try. If they are a match, then they should work together.
  3. Install the graphic cards into the computer.
  4. Connect the cards by a SLI bridge. If you do not have a SLI bridge you can buy one from Amazon. But if one of your cards does not have an area for the SLI bridge, you are out-of-luck.
  5. Download Different SLI 1.0.
  6. Unzip the folder and start Different SLI.
  7. Press the "Patch" button.

NOTE: GTX 970 and 980 hardware IDs match, so they should be able to SLI together.
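
If you want a quick way to pull those hardware IDs without digging through Device Manager, here is a small sketch for Windows; it shells out to wmic (still present on Windows 7/8/10), and the parsing is deliberately rough:

    import re
    import subprocess

    # List every video controller with its PNP device ID, then highlight the
    # four hex digits after "DEV_" - the first three are what you compare.
    output = subprocess.run(
        ["wmic", "path", "Win32_VideoController", "get", "Name,PNPDeviceID"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in output.splitlines():
        match = re.search(r"DEV_([0-9A-Fa-f]{4})", line)
        if match:
            print(line.strip())
            print("  Device ID:", match.group(1), "-> compare the first three characters")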

You should know for certain whether a card you plan to get, or already have, will work with Different SLI before trying this. There is no reason to run out and buy a card only to find that it simply won't work, just as there is no reason to try Different SLI on two cards you already own if they are not compatible.

This is by far the most interesting item to discuss (for me). I have been wanting to try this for some time, but have either not had the funds or not had the resources available. When I first found out about this method I had a GTX 480. The GTX 580 was incompatible (under either theory), and I already had SLI, so I moved on.

But now I use a GTX 680 and would love to give a GTX 770 or a GTX 690 a try, just to see if I can do it and how well it performs. With a GTX 770, it should behave much like a GTX 680 or 770 SLI setup. A GTX 690, however, would be more like GTX 680 Tri-SLI. Others will argue it would act more like a GTX 670 in SLI with a 680, even though the GTX 690's CUDA core count is exactly double that of a GTX 680 and it benchmarks nearly identically to a GTX 680 SLI setup. It would be even more interesting since a GTX 690 can be said to be about on par with a GTX 780 Ti or a GTX 980, depending on the settings and game. I would really like to compare benchmarks between a GTX 680 & 690 combination and a GTX 780 Ti SLI or GTX 980 SLI setup. But I digress...

Hybrid SLI/Hybrid Crossfire
Hybrid SLI or Hybrid Crossfire refers to having a discrete GPU (a GPU not part of the motherboard or the CPU) linked with the graphics chip integrated into the motherboard or CPU. This is not to be confused with NVIDIA's Optimus or AMD's PowerXpress on mobile chips (laptops and notebooks); those features simply switch between the discrete graphics card and the integrated graphics depending on GPU load.

While this was originally used with motherboards that had integrated graphics, it is now largely a CPU matter. With the popularity of CPUs incorporating integrated graphics, there is less need for motherboards with integrated graphics. I also want to briefly mention that AMD refers to its CPUs with integrated graphics as APUs (Accelerated Processing Units), although the definition of an APU would arguably apply to Intel CPUs as well.

Just for a bit of history, and for my next thought: the idea of combining two graphics processors was and is called hybrid graphics. As the term is used today, however, it is mostly about switching from discrete to integrated graphics, not combining the two.

Intel still has hybrid graphics, but NVIDIA no longer supports it. You should be able to downgrade your drivers and do some tweaks to get it working, but newer cards may take a performance hit. Intel also has two types of hybrid graphics: Fixed and Dynamic. Fixed chooses the graphics source based on power; Dynamic chooses it based on GPU load. NVIDIA does still have an official page with Hybrid SLI information here, but you'll notice it hasn't been updated for newer cards in some time...

AMD offers Radeon Dual Graphics, which can be turned on in the BIOS when using an AMD discrete graphics card alongside an APU. This is considered Hybrid Crossfire and can give an added boost when necessary. It should never be considered a better solution than true Crossfire, however.

SLI with Crossfire!
I recently came across a video on YouTube where a group has apparently done the impossible: they have SLI and Crossfire in the same rig! Now, unfortunately, this isn't SLI working together with Crossfire; it is just having both technologies in the same computer, which is still unheard of.

Evidently, you just plug a monitor into the respective setup and change the primary display to whichever setup you want to use. The price quoted in the video is $5000. Not too bad for a special gaming rig, but too steep for my blood.

A couple of things of note: I hate it when people discover new ways to do things and do not share them freely with the public. Yes, making money off an original (working) idea is great, but all of the other techniques I have listed are, and always have been, free. Then again, I could barely fit three cards in my current rig, so no real loss there.

Another thing: I have been trying to deduce how they could have done this, and I think the key is Crossfire. Looking at the video, you can tell the cards are really snug. Crossfire does not need a physical bridge (if you are using an R9 285 or higher), and that is what makes this all possible. Without that one feature, this video could not exist short of some sort of custom bridge connector.

I imagine the drivers are where the meat of this method lies. At first I was thinking maybe they tweaked the drivers to work properly, or maybe they have their own special software running. But then I remembered some of my own testing on the AMD R9 290's I bought last year. My desktop was, and still is, using a GTX 680, so I already had the NVIDIA Control Panel installed. I didn't want to bother uninstalling and reinstalling it (as I wanted to keep using the GTX 680), so I just installed AMD Catalyst alongside it. The only problem that ever occurred was that when the GTX 680 was the only card in my computer, I would get an error message from AMD Catalyst stating that no AMD graphics cards were present.

A long time prior to that, I had been trying to SLI two GTX 480's. I was actually trying to build a Tri-SLI setup, which led to a lot of random testing. On that motherboard, it was recommended to put a single graphics card in the top (first) PCI Express slot, and to use the last (third) PCI Express slot if the first had any issues. During one of my trials I did try the last slot with a single graphics card and was able to use my monitor without trouble. I believe I was even able to use the middle (second) slot without issue...

Why are these situations important? Well, first off, they show that the two sets of opposing drivers can be installed and used without conflict. Second, there can be more than one usable primary slot for a graphics card. Recall that in the video, switching from SLI to Crossfire or vice versa simply means selecting the primary card in your display settings. What I then imagine is that having dual displays is actually unnecessary, since only one works at a time. You could have one display plugged into one setup, pull the plug and reinsert it into the other setup when you want to switch, change the primary display card, and get the same effect. Two displays just make it easier (lazy!).

So, what I'm trying to get at is that I don't think there is really any trick involved. As long as you have newer AMD cards that do not rely on physical bridge connections, and a motherboard that supports 4-way SLI/Crossfire, you should be able to accomplish this without any modifications or hacks. Unfortunately, I am currently unable to test this theory. I have enough graphics cards for Crossfire and SLI, but the motherboards at my disposal will do 3-way SLI/Crossfire at best. There is some sort of legendary add-on board that should allow 4-way, but it has never surfaced to the public.

Farewell
This was an enjoyable article to write and share. My hope is that people stumble across it and think, "I didn't know you could do that!" I have a lot of friends who don't even know much of this is possible; part of that may be that they don't have a vested interest. But I do hope that if you read this, you will find the information interesting, if not useful.

Thursday, November 27, 2014

How to Get the BEST 4K Video Out of a 1080p Camera!

I've held off on this article as I did not feel the time was right, but it has become too evident that 4K is the next big thing for TV's and digital video recording. Many of us are lucky enough to own camcorders or cameras that output 1080p quality video, but now that 4K is becoming increasingly popular, should we toss those devices aside and purchase a 4K camera? I would like to argue that (for now) 1080p can suffice for 4K videos, and I would like to show you how and why...


Back in the Day
Just before I was about to be a sophomore, I switched high schools. I was late to the party so I only had a few choices for my electives. I opted for a video class that I thought would be easy; boy, was I wrong!

The former teacher was known to be very lax and the video class had been easy. But the current teacher wanted his students to really learn how to create video projects at a high standard. Before you were allowed to even touch a NLE (Non-Linear Editor), you had to learn how to edit with VHS. While not as hard as editing 8mm with a film splicer, it was still an arduous task. Once you had done one project on a VHS editing bay, you could then move onto a NLE.

When I first sat down in front of a computer (loaded with Adobe Premiere), I didn't know where to start. I could open the program, but that was about it. So I pulled over a classmate and asked them one question: which button cuts the video? From then on I educated myself on all the skills and technical aspects of production and post-production.

I was fortunate enough to get into digital video just before 24p became a huge, revolutionary way to capture and output video. For the younger crowd, it was much like the jump from SD to HD (1080p); for newer crowds it will be akin to the jump from 1080p to 4K. I wouldn't say that things were simpler back then, but because there was less to contend with, it gave a sturdier grounding in digital video and all the new features that would start to arise.

Originally I wanted to do something in computers, but since that first class in high school, I have continued my studies and experimentation in digital video. I even went on to get a degree in film and media. That's not to say I forgot about computers altogether, otherwise this blog would not exist.

1080p vs. 4K
1080p is a video resolution of 1920 x 1080. 4K is a video resolution of 4096 x 2160. There are other 4K resolutions, but I won't get into that here (check "14 Things People Get Wrong..."). 4K is roughly 4 times larger than 1080p. You can check this by multiplying the dimensions above and dividing the 4K total by the 1080p total; in pixels, the comparison is about 8 million to 2 million. This should help you understand why 4K is so much better quality than 1080p, even without a TV that lets you test the theory.
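
Worked out: 4096 x 2160 = 8,847,360 pixels, while 1920 x 1080 = 2,073,600 pixels, and 8,847,360 / 2,073,600 ≈ 4.27.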

1080p vs. 2K
2K is a video resolution of 2048 x 1080. Again, there are some other 2K resolutions, but I will not be getting into them. 2K is just barely wider than 1080p; the height is the same. 2K is the resolution used in most cinemas (though some cinemas do show movies in 4K). This may help explain why the push for cameras, camcorders, and TVs to support 2K never really came to fruition.

Be Forewarned
The best way to get 4K video is obviously to have a 4K camera or camcorder. A 4K smartphone may be cool, but it is not a good way to get 4K footage (even with hacked bitrates).

The only camera I know that can be hacked to become a 4K camera is the Sony F5, but the camera alone still costs thousands upon thousands of dollars. It would be better to get something like a Panasonic GH4, especially now since there are rumors that Panasonic may discontinue the line...

But if you can't afford those options, then follow along below.

Preparation for the Transformation
There are a few things you can and should do in order to make the best quality out of 1080p video footage.

The first is a pre-production step. If you own a Canon EOS, a Panasonic GH2 or GH1, or any other camera that can be hacked for a higher bitrate, you should enable those settings. You will need to spend ample time researching how these hacks are performed, how they can behave, and how to reverse the process if needed. For an applicable Canon, you can use Magic Lantern, which provides a setting to increase the bitrate; the process is easily reversible by just pulling out your battery. On a GH1 or GH2, the process is a little more involved, as you need to actually replace the firmware and find a respective patch that will increase your bitrate (Driftwood's Moon or Cake settings are a good choice for GH2 users). Of course, there are going to be a lot of cameras without any sort of hack, so you can skip this step if it does not apply. (Or pick up a cheap Canon that can use CHDK; camera list available here.)

For my tests I ended up using a hacked GH2 that got me about 80-100 Mb/s for my 1080p footage. The footage came out superb and was a perfect candidate for a 4K upscale.

The second step is something anyone should be able to accomplish. After creating your video, but before throwing it into an NLE, download and install the free version of GoPro Studio. You can then increase your video footage's bit-depth from 8-bit to 10-bit. I won't go into the exact steps, as I use NeoScene (a product that was discontinued when GoPro bought CineForm). They have the same capabilities, except that GoPro Studio is an editing suite, while NeoScene also includes the ability to convert AVCHD files (great for GH2 users like me!). If you want NeoScene for AVCHD footage, you may be able to find some valid serials sold on eBay. I have a posting here with the actual downloads that GoPro's site no longer makes available for discontinued products.

If you're asking why you should bother with the second preparation task, since bit-depth is about color, it's because the extra color information will likely help produce a better 4K version of your 1080p footage. 8-bit gives you 256 levels per color channel, while 10-bit gives you 1,024. And when doing the upscale process below, the computer is trying to produce non-existent details from details that do exist. Giving it more to work with is better than less in these circumstances.
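
For a concrete sense of that difference, here is another small Python sketch of the arithmetic (just to show the numbers):

    # Levels per color channel at a given bit depth, and total RGB combinations
    for bits in (8, 10):
        levels = 2 ** bits        # 256 for 8-bit, 1,024 for 10-bit
        total = levels ** 3       # combinations across the R, G, and B channels
        print(f"{bits}-bit: {levels} levels per channel, {total:,} possible colors")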

How to Make 1080p into 2K
The easiest way to go about this is just to enlarge your video until the width is 2048 pixels (the extra few pixels of height simply get cropped to fit the 2048 x 1080 frame). Sure, some quality loss will occur, but not enough to be noticeable to the naked eye. To reiterate, there isn't much of a reason to do this unless you are specifically asked to do so (e.g., a client needs their digital video delivered in 2K).
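
To put numbers on that enlargement, here is a rough Python sketch of the scale factor involved (purely illustrative):

    # Uniform scale needed to take a 1920x1080 frame to 2K DCI width (2048)
    src_w, src_h = 1920, 1080
    target_w = 2048

    scale = target_w / src_w        # ~1.067, roughly a 6.7% enlargement
    new_h = round(src_h * scale)    # ~1152; the extra height overflows the 2048x1080 frame and gets cropped
    print(f"scale: {scale:.1%}, scaled height: {new_h}")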

If this isn't satisfactory, you can use similar steps that will be discussed for 4K.

How to Make 1080p into 4K
Now, you could just scale 1080p footage to the dimensions of 4K, but unlike 2K, the quality will be substandard. Because we are enlarging the footage to more than four times its original pixel count - where 2K is only a slight pulling of the width - the quality will immediately take a hit and be noticeable to enthusiasts and the general public alike.

A simple way, which I do not recommend, is to first scale the 1080p footage to 4K, then use some post filter effects. You can throw on some Gaussian blur to smooth over the banding and artifacts that will undoubtedly arise, and use a bit of sharpening to get back some of that grain for realism's sake. This can be useful for quick jobs, but it doesn't give the footage the quality it deserves.
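
Just to illustrate that quick-and-dirty route (and to be clear, this is not the After Effects method below), here is a rough per-frame sketch, assuming Python with OpenCV installed; the file names, blur sigmas, and sharpen amounts are placeholders, not recommendations:

    import cv2

    # One 1920x1080 frame pulled from the footage (placeholder file name)
    frame = cv2.imread("frame_1080p.png")

    # 1. Naive bicubic upscale (UHD 3840x2160 used here for simplicity)
    upscaled = cv2.resize(frame, (3840, 2160), interpolation=cv2.INTER_CUBIC)

    # 2. Light Gaussian blur to smooth over the banding and artifacts the upscale introduces
    smoothed = cv2.GaussianBlur(upscaled, (0, 0), sigmaX=1.0)

    # 3. Unsharp mask to bring back a bit of apparent grain/detail:
    #    result = smoothed + 0.5 * (smoothed - more_blurred)
    more_blurred = cv2.GaussianBlur(smoothed, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(smoothed, 1.5, more_blurred, -0.5, 0)

    cv2.imwrite("frame_4k.png", sharpened)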

Instead, you should use Adobe After Effects CC. If you don't have it, try out a trial version. If you use up the trial, either buy it or reinstall your OS and do the trial again (a bit extreme unless you have a spare computer you don't actively use).  

Not long ago, Adobe introduced a new "Detail-Preserving Upscale" effect in Adobe After Effects CC. It is fairly easy to use, and the results are quite phenomenal, even more so when compared to a standard upscale of the footage.

Now that you have the required tool to perform this trick, follow these steps to transform your 1080p footage into 4K:

  1. Open After Effects
  2. Select "New Composition".
  3. Change the values to 4096 x 2160.
  4. Name the composition whatever you like.
  5. Click "OK."
  6. Import the 1080p footage into your "Project" panel. (Drag-and-drop your footage on the Project panel, or right-click the "Project" panel and go to Import > File... to browse for your footage. File > Import > File... also works.)
  7. A "comp" will be created along with the footage import.
  8. Drag-and-drop the footage into your comp timeline.
  9. In the "Effects & Presets" panel, type in "Detail". The "Detail-Preserving Upscale" effect should appear. (If it does not, you are not on the latest version of Adobe After Effects CC.)
  10. Drag-and-drop the effect onto the footage in your comp timeline and it should automatically apply to your footage, and open the "Effects" panel.
  11. Ensure your timeline is at the beginning of your footage by dragging the time cursor to the start of the footage.
  12. Click the "Scale" stopwatch provided underneath the "detail-preserving upscale" effect in the "Effects" panel.
  13. Use the "Scale" option to increase the size of your footage until it fills the entire frame (see the scale calculation after the notes below).
  14. Click the "Detail" stopwatch provided underneath the "detail-preserving upscale" effect in the "Effects" panel.
  15. Increase the amount of the "Detail" to 100%.
  16. Click the "Alpha" stopwatch provided underneath the "detail-preserving upscale" effect in the "Effects" panel. (This step and 17 are optional, however, it does seem to look slightly better if done.)
  17. Change the value from "Bicubic" to "Detail-preserving".
  18. Go to Composition > Add to Render Queue.
  19. The render queue should appear in place of the comp timeline. Adjust your settings as preferred.
  20. Click "Render".

Note: While 4096 x 2160 is true DCI 4K, many "4K" TVs only support up to 3840 x 2160 (UHD). So, you may want to check which value to use depending on the monitor or TV the video will be viewed on.

Note II: The stopwatch is usually not needed unless you plan on animating that property over time. However, I found that not clicking it for this effect made the render queue ignore the filter altogether in the output. For my tests, it seemed to work if I only set the stopwatch for Scale, but to be safe you may want to set it for all of the effect's properties.
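
If you're wondering how far to push the Scale value in step 13, the math is simple. Here is a quick Python sketch covering both the DCI 4K and UHD comp sizes:

    # Scale percentage needed to fill a 4K comp from a 1920x1080 layer
    src_w, src_h = 1920, 1080

    for name, (comp_w, comp_h) in {"DCI 4K": (4096, 2160), "UHD": (3840, 2160)}.items():
        # A uniform scale has to cover both dimensions, so take the larger factor
        scale = max(comp_w / src_w, comp_h / src_h)
        print(f"{name}: scale to about {scale:.1%}")    # ~213.3% for DCI 4K, 200% for UHD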

There is no perfect upscaling solution, and as such, this is not a perfect solution either. But it is by far the best method I have come across, as well as the least time-consuming. There will never be a perfect solution for upscaling because you are taking something smaller and making it bigger, which will always cause some banding and artifacts.

Downscaling
If you don't know much about video and are thinking you should use the above steps to downscale video, don't. It's really just a waste of time. Problems with detail and quality only happen when going from small to big, not big to small. Decreasing the size of footage retains its detail and doesn't create any of the issues noted above. You can simply scale down the footage in any NLE and be fine.
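
Just to make the point concrete, downscaling really is a single resize. A minimal sketch, again assuming Python with OpenCV and placeholder file names (INTER_AREA is the interpolation generally recommended for shrinking):

    import cv2

    # Shrinking a frame needs no special treatment - one resize call is enough
    frame = cv2.imread("frame_4k.png")    # placeholder 3840x2160 frame
    smaller = cv2.resize(frame, (1920, 1080), interpolation=cv2.INTER_AREA)
    cv2.imwrite("frame_1080p_downscaled.png", smaller)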

However, if you go from big to small and then try to go back from small to big, the problems above will occur. It is smarter to keep a backup of the full-size original, since it will always look better than anything you could recreate with the upscaling solution I have presented.

4K You, 4K Me, 4K Everybody!
I have found other software that performs upscaling, but all of it gives mediocre results and cannot come close to the quality retention Adobe After Effects provides. I should say that the larger the gap in dimensions, the less useful any of this becomes. If I try to upscale 480p to 4K, the results will look horrible regardless of what program is used. On the flip side, if upscaling from 4K to 6K, this method should still be valid. Because 8K is so massively large, this technique may or may not hold up when coming from a 4K source (and certainly not from 1080p).

If you need to test your footage, you can always upload it to YouTube, which should give you the option to view it in 4K. And if you need to keep the great quality of your 4K video but want a more manageable file size for sending or uploading, check out "VP9 + Opus = WebM...", which also has the 4K example I made.

Another thing to remember is that your source footage is not 4K; only your outputted video is. For most of us, this method will suit us fine. Others may rather purchase a 4K camera for the real deal. But that doesn't mean you should trade in your 1080p camera, as you may need a dual-camera setup one day, and this technique can help produce a beautiful 4K video from that second camera that only offers 1080p...