List of True 16:9 Resolutions

2013-01-20
Updated to include 4K UHDTV.

In an effort to enhance the knowledge of the video-making community, I have compiled a list of all true 16:9 video resolutions, including their associated standard when applicable, and a note of when the resolution is divisible by 8, which matters for some limited video encoders. The table goes up to 1080p and also includes the common resolution of 27-inch 16:9 computer monitors, as well as 4K UHDTV and Super Hi-Vision.

If you’ve ever worked with SD content, you’ll notice that no resolution in here fits the DVD standard. That’s because DVDs were originally made to comply with the NTSC broadcasting resolution, which is a non-square pixel standard using the resolution of 720 by 480 pixels, stretched to accommodate either 4:3 or 16:9 content, never producing a true 16:9 resolution.
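
If you want to check a given resolution against the two criteria used in the table below, here is a minimal Python sketch (the function names are just illustrative):

def is_true_16_9(width, height):
    # A true 16:9 resolution has width:height exactly equal to 16:9.
    return width * 9 == height * 16

def is_divisible_by_8(width, height):
    # Both dimensions must divide evenly by 8 for limited encoders.
    return width % 8 == 0 and height % 8 == 0

print(is_true_16_9(1280, 720), is_divisible_by_8(1280, 720))  # True True
print(is_true_16_9(854, 480), is_divisible_by_8(854, 480))    # False False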

Width Height Standard Divisible by 8
16 9
32 18
48 27
64 36
80 45
96 54
112 63
128 72 Yes
144 81
160 90
176 99
192 108
208 117
224 126
240 135
256 144 Yes
272 153
288 162
304 171
320 180
336 189
352 198
368 207
384 216 Yes
400 225
416 234
432 243
448 252
464 261
480 270
496 279
512 288 Yes
528 297
544 306
560 315
576 324
592 333
608 342
624 351
640 360 Yes
656 369
672 378
688 387
704 396
720 405
736 414
752 423
768 432 Yes
784 441
800 450
816 459
832 468
848 477
864 486
880 495
896 504 Yes
912 513
928 522
944 531
960 540
976 549
992 558
1008 567
1024 576 Yes
1040 585
1056 594
1072 603
1088 612
1104 621
1120 630
1136 639
1152 648 Yes
1168 657
1184 666
1200 675
1216 684
1232 693
1248 702
1264 711
1280 720 720p HD Yes
1296 729
1312 738
1328 747
1344 756
1360 765
1376 774
1392 783
1408 792 Yes
1424 801
1440 810
1456 819
1472 828
1488 837
1504 846
1520 855
1536 864 Yes
1552 873
1568 882
1584 891
1600 900
1616 909
1632 918
1648 927
1664 936 Yes
1680 945
1696 954
1712 963
1728 972
1744 981
1760 990
1776 999
1792 1008 Yes
1808 1017
1824 1026
1840 1035
1856 1044
1872 1053
1888 1062
1904 1071
1920 1080 1080p HD Yes
2560 1440 27″ Monitor Yes
3840 2160 4K UHDTV Yes
7680 4320 8K UHDTV (Super Hi-Vision) Yes

Interesting Mathematical Observations

While every simultaneous multiple of 16 by 9 produces a true 16:9 resolution, the ones divisible by 8 only occur at every simultaneous jump of 128 by 72 pixels, 128 by 72 being the lowest true 16:9 resolution that is divisible by 8.
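
As a quick check of this observation, a small Python sketch (illustrative only) can regenerate the list and confirm the 128 by 72 stride:

# Every true 16:9 resolution up to 1920 x 1080 is a multiple of 16 x 9.
true_16_9 = [(16 * n, 9 * n) for n in range(1, 121)]

# Keep only those where both dimensions are divisible by 8.
div_by_8 = [(w, h) for w, h in true_16_9 if w % 8 == 0 and h % 8 == 0]

print(div_by_8[0])                      # (128, 72), the lowest such resolution
print(div_by_8[1][0] - div_by_8[0][0])  # 128, the width stride between entries
print(div_by_8[1][1] - div_by_8[0][1])  # 72, the height stride between entries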

49 thoughts on “List of True 16:9 Resolutions”

  1. Wow, just what I needed. Now I can convert my DVDs to 16:9. Before, I would convert to 640×352, but there would be some black bars at the top and bottom of my videos when I played them on my 16:9 screen, so this has helped so much.

  2. Is “854 by 480” not true 16:9? I’ve come across that as a square-pixel variation of the widescreen “720 by 480” format. I’m just wondering if that would cause any conflict…

    (where ~ means the preceding digit pattern repeats to infinity)

    The conflict comes from the very fact that it is based on 720 by 480. You can’t get a ratio of 16:9 by keeping 480 lines of pixels. The closest you’ll get is 853.333~ by 480, which is not feasible. You could opt for either 853 or 854 by 480, but these aren’t true 16:9 resolutions, which is why they aren’t featured here.

    16:9 = 1.77777777~
    853:480 = 1.77708333~
    854:480 = 1.77916666~
    1280:720 = 1.77777777~
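
    A quick Python sketch (purely illustrative) makes the arithmetic plain; there is simply no integer width that gives exactly 16:9 at 480 lines:

    from fractions import Fraction

    height = 480
    exact_width = Fraction(16, 9) * height  # the width a true 16:9 would need
    print(exact_width)    # 2560/3, i.e. 853.333~
    print(853 / height)   # 1.77708333~ (not 16:9)
    print(854 / height)   # 1.77916666~ (not 16:9)
    print(1280 / 720)     # 1.77777777~ (true 16:9)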

    Historically speaking, 720 by 480 comes from the digital conversion of the NTSC broadcast standard by Sony with the D-1 standard. Your conventional 4:3 CRT TV could be said to display non-square pixels, interlaced, at a resolution of 720 by 480.

    There are two things you have to understand about this format:
    1. It never displayed square pixels, so there was never a need to translate 4:3 content to, say, 640 by 480.
    2. It was never meant to display widescreen content, hence the aberration that is the DVD’s letter-boxing of 16:9 content, and the appearance of many approximations such as 854 by 480, or Apple’s all-too-wrong 848 by 480, none of which, by the way, have ever been standard.

    If you really want to stick to 480p, you could encode all your videos in 720 by 480 and let the software scale it to 16:9, which would at least allow a full-screen view to emulate true 16:9 with the higher pixel count, such as an upscaled 1080p on an HDTV.

  4. Pacoup, thanks for the list, I really appreciate it. I posted a YouTube video to my website with YouTube’s embed feature. The weird thing is that when I chose custom dimensions, I set it to 384, expecting it to be 384×216 (per your list), but it showed 284×225. Even stranger is that it created an iframe with dimensions of 384×195. It looks like YouTube doesn’t support smaller 16:9 ratios, as it seems that it has been converted to 4:3 with letterboxing at that size.

    the video is here http://www.sitebuilder.ws/online-sitebuilder

  5. I have a question about the numeric ‘p’ labels. For instance, when one sees “480p”, why is the vertical resolution 400 (720/704×400) on 16:9? Yet “352p” is 624×352 and “576p” is ~854×480? I don’t understand where the labels come from and why some match the vertical resolution and some don’t. I know some of it has to do with the old 4:3 aspect, like 640×480 being “480p”. Basically, I want to know: if, say, I encode video at 1024×576 resolution, what can I call it in terms of “XXXp”? I know not “576p”, since that’s 854×480. Any help or comments are appreciated.

  6. The problem is that you’re referring to standards tied to old TV formats, from when widespread video was 4:3.

    Traditional 576p is a non-square pixel standard designed for progressive scan SDTV in PAL and SECAM countries. Translated to digital broadcast, two resolutions are widely used, either 720 by 576 or 704 by 576, neither of which is 4:3, so the pixels are non-square. This is fundamentally different from how video on the Web works.

    To add to the confusion, SBS and Seven Network in Australia used to have 1024 by 576 as an HDTV resolution, and they called it 576p as well. They have now migrated to the standard 1080i on ATSC or DVB.

    The reason why 854 by 480 may occur with 576p content is that NTSC, the previous standard used in North America and Japan, only has 480 vertical lines. Therefore, the content needs to be converted to 720 by 480, which translates to 854 by 480 when displayed on a digital square-pixel display such as your computer monitor.

    640 by 480 was the 4:3 translation of standard 480i broadcasts in NTSC, which were, as mentioned before, 720 by 480. 640 by 480 is not a standard. There are technically variations to this, such as 720 by 486, or 704 by 480, or 704 by 486, but they have to do with the way analog content was later transferred digitally to analog TVs (in the pre-HDTV era in NTSC countries), and this is slightly out of scope. You can read more on Wikipedia if you’re interested.

    The 720 or 704 by 400 phenomenon is the principle of applying a 16:9 ratio to 480p content without changing the width of the file, the width being either 720 or 704, as mentioned before. Remember that true NTSC 480p, such as what is seen on DVD, does not use square pixels. You have to stretch the video to 4:3 or 16:9, so both 720 by 400 and 854 by 480 are the same content, just 480p stretched to different scales. Neither resolution is actually a standard, since the NTSC standard itself refers only to analog content. This is why there are so many variations. Every manufacturer pretty much figured out their own solution.

    As for 352p, it’s just an invention of the Web, and it is not true 16:9, so it’s ill-advised to use it.

    If I were you, I would stick to real standards, such as 720p or 1080p, and not try to go with awkward resolutions like 1024 by 576, despite them being true 16:9. 1024 by 576 is a good interim resolution for scaled down 720p or 1080p video on a webpage though, because most users will be able to see the entire video given the prevalence of 1024 by 768+ displays. Doing so also avoids confusion with prior 576p standards, which actually refer to PAL and SECAM content rather than digital HD video.

    I would also try not to use 480p for 16:9 content because it is an aberration created by the transition from analog to digital. My personal favorites for 16:9 content are 360p, 504p, 720p and 1080p, although I would much rather have only HD content.

  7. Scott, the problem with your video is that the YouTube player has a UI which is 26 pixels high. If you set your player’s <iframe> to 384 by 242, your video should appear without a black frame.

  8. I am even more confused; my new monitor is widescreen at 1440 x 900 pixels.
    I have finally found drivers and BIOS upgrades for my two older computers and their graphics cards to give this resolution option under Windows XP. I can see that it should be 1600 x 900 for true 16:9, so why do things look right at this resolution? Or is my brain auto-correcting any perceived error?

  9. Roger, 1440 x 900 is a standard 16:10 resolution commonly used on notebooks. It is not 16:9, and that’s normal.

    Your monitor’s resolution is typically independent of the content being shown, even in some HDTVs. The GUI will be normally displayed in the correct aspect ratio, whatever the resolution. Your videos should be displayed correctly, but might be letterboxed because of differing monitor/video aspect ratios (black bars: http://en.wikipedia.org/wiki/File:Image_cropping_235x1.jpg).

    This article is specifically about 16:9 ratios for typical widescreen video encoding. Other videos might not be in 16:9, such as old SDTV shows which use a 4:3 ratio, and cinema pictures which use various ratios such as 2.35:1, which this article does not cover.
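
    As a rough illustration of that letterboxing, here is a small Python sketch (assuming a 1920 by 1080 display and 2.35:1 content; the numbers are only illustrative):

    def letterbox(screen_w, screen_h, content_aspect):
        # Height the content occupies when fitted to the screen width,
        # and the resulting black bar above and below it.
        content_h = screen_w / content_aspect
        bar = (screen_h - content_h) / 2
        return round(content_h), round(bar)

    print(letterbox(1920, 1080, 2.35))  # (817, 131): ~131 px bars top and bottom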

  10. 16:9 isn’t a “resolution”, it’s an aspect ratio. 720 by 480 is a display size. 72 or 96 dpi are resolutions.

  11. I don’t know if you meant to correct me or indicate to the previous commenter what 16:9 means, but note that’s not what I meant.

    I wrote: “1440 x 900 is a standard 16:10 resolution”, which, contextually, meant: “1440 x 900 is a standard 16:10 [format] resolution”, that is, “1440 [pixels] [by] 900 [pixels] is a standard [...] resolution”.

    My grammar was correct.

  12. I haven’t started looking for this on your web site yet, but I am trying to get information on 2 areas pertaining to videos.

    1. What is “MUX” rate and what happens when one adjusts this rate?

    2. I have seen videos use anywhere from 256 kbps to 1800 kbps. Is there a recommended kbps per resolution to avoid the blocky areas in the video?

    Thanks in advance for your time.

    I have bookmarked your page as I find it very helpful!!

  13. I haven’t specifically written about all of those, or at least not recently enough, but here are some explanations:

    1. MUX is an abbreviation for multiplexing. In electronics, a multiplexer is used to combine different signals into a single stream, typically for easier handling. In digital video, multiplexing plays two roles: it combines audio and video into a single time-coded stream so the player doesn’t fall out of sync, and it keeps the stream being read by the player at a constant bitrate, which is easier to handle (in theory; this ultimately depends on the codec and decoder implementation).

    The MUX rate refers to the total bandwidth used to compile all of the elementary streams, such as audio and video, into a single one. For instance, an MPEG-2 video could use a MUX rate of 10.08 Mb/s for a given video, meaning packets from the audio and video streams will be split and combined in such a way that the receiving player gets a 10.08 Mb/s feed.

    While the MUX rate is typically set automatically, in certain scenarios it’s difficult to know in advance how the bitrate should be split between the video and audio streams, such as when encoding for a DVD. In that case, however, the maximum MUX rate is known, as it’s directly dependent on the specs of the DVD standard (i.e. the maximum read speed of a DVD disc). The MUX rate can thus be enforced, and the encoder told to use whatever bandwidth remains available to encode the video.

    Additionally, the size of the packets used to combine data in the muxed stream can usually be defined. Larger packets will be faster to transfer, but in situations such as weak Internet connections, they will increase the error rate, as one dropped packet loses more of the video’s information than a smaller one would. So, in Web encoding, a smaller packet size may be desired.

    Arbitrarily changing the MUX rate is usually not something you do, unless you’re targeting specific fixed media. Good encoders should define the optimal MUX rate for you automatically, based on the total bandwidth of your video streams.
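
    As a rough sketch of that budgeting idea (in Python, with purely illustrative numbers: the 10.08 Mb/s figure above, an assumed 448 kb/s audio stream and a guessed allowance for packet overhead):

    MUX_RATE_MBPS = 10.08     # total multiplexed stream budget (DVD example above)
    AUDIO_MBPS = 0.448        # assumed audio bitrate
    OVERHEAD_FRACTION = 0.02  # assumed allowance for packet headers

    video_budget = MUX_RATE_MBPS * (1 - OVERHEAD_FRACTION) - AUDIO_MBPS
    print(f"Video bitrate budget: {video_budget:.2f} Mb/s")  # ~9.43 Mb/s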

    2. As for the specific bitrate of any video, the answer depends on the following factors:
    - Codec
    - Encoder
    - Encoder configurations / Encoding profile
    - Resolution
    - Frame rate
    - Color space
    - Chroma format
    - Video complexity
    - Type of bitrate (constant, variable)

    There is no one specific formula for the ultimate bitrate, but in general, experience can give you some guidelines.

    For instance, with H.264 or VP8 (typically contained in “mp4” and “webm” files), you can obtain good enough quality for most scenarios with a 10 Mb/s variable bitrate on a 1080p video, which is 2,073,600 pixels, in 4:2:0 chroma format with a fast encoder setting on Main or High Profile, Level 4.1. Assuming these settings, you can estimate the required bandwidth from the total number of pixels. For instance, 720p is 921,600 pixels in area, meaning the required bitrate is in theory 2.25x lower to achieve the same quality, roughly equivalent to 4.4 Mb/s.
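
    As a rough sketch of that rule of thumb (Python, illustrative only; it assumes the same codec, settings and content complexity):

    def scaled_bitrate(ref_bitrate_mbps, ref_w, ref_h, target_w, target_h):
        # Scale a known-good bitrate by the ratio of pixel counts.
        return ref_bitrate_mbps * (target_w * target_h) / (ref_w * ref_h)

    # 10 Mb/s at 1080p (2,073,600 px) scaled down to 720p (921,600 px)
    print(round(scaled_bitrate(10, 1920, 1080, 1280, 720), 2))  # ~4.44 Mb/s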

    In my personal experience, I have achieved better quality at 2.6 Mb/s on the same video, but the encoding time is as much as octupled because of the massive amount of processing required by the higher-quality encoder settings, and the video is more difficult to play back (i.e. it may not be friendly with older computers or with the majority of DSPs, the digital signal processing units such as the chip dedicated to decoding H.264 video in your GS2).

    Most encoders also feature quality-based encoding, where instead of analyzing the video and distributing the best available bitrate across scenes within preset constraints, the encoder assigns a bitrate automatically according to a quality setting. This can yield good quality regardless of the video (the bandwidth will simply ramp up if the video’s complexity requires it) and can be much faster to encode since it doesn’t need two or more passes, but it can never achieve the efficiency of a targeted variable bitrate.

    All in all, I’d say that video encoding is a science in and of itself, and depending on your needs, or interests for that matter, you may have to do a bit of studying.

    Here are a few article links to get you started:
    http://en.wikipedia.org/wiki/Multiplexer
    http://en.wikipedia.org/wiki/Bitrate
    http://en.wikipedia.org/wiki/H.264
    http://en.wikipedia.org/wiki/CABAC
    http://en.wikipedia.org/wiki/Chroma_subsampling
    http://en.wikipedia.org/wiki/Y%27CbCr
    http://en.wikipedia.org/wiki/Color_space

  14. I don’t mean to nitpick, but you should probably leave the “p” and “i” out of your explanations, as this only confuses the issue.

    P and I refer to “progressive” and “interlaced”, and are a hold-over from the days of analog broadcasting. They refer to how a stream is encoded. For example: a typical NTSC broadcast stream is encoded at 720×240 @ 60 Hz, with each field of the source stream containing only half of the video information. These fields are then displayed back-to-back at 60 fields per second to create the full 480 lines of resolution at 30 fps.

    It was necessary to do it this way, as the simple electronics of early television equipment relied on the frequency of alternating current (Hz) to achieve its framerate… 60 Hz = 30 fps for NTSC, and 50 Hz = 25 fps for PAL. Most converters nowadays only translate interlaced content to progressive, and don’t even give you the option to encode interlaced content. Most offer options on how to “de-interlace”, and that’s about it.

    People who buy TVs rated at 1080i often complain of there being lines across their screen. That’s because they’re only getting half the picture. You get what you pay for! :-)

  15. I’d rather argue that the mere fact that ATSC broadcasts still use 1080i makes interlaced content relevant today. I would expect anyone working in the broadcast industry or video encoding field to know this.

    But thanks for the good explanation on where interlacing comes from =D

  16. I am a bit out of my depth in this amazing community I stumbled into while editing photos. I must say though that I actually feel a fraction of my previous IQ score higher than I did when I arrived. I feel the chances are better than “60%” in my favor that I may well be able to hold my own if someone has the knowledge and time and willingness to answer my query (in relative laymen terms). This opinion is based solely on the fact that I read through every post on this page and only once or twice imitated “Thor looking at fire for the first time”.

    I have a streaming video/audio question. I am not certain whether mine is in a category that relates to your forum or not, but it sure sounds like it should. My question relates to maintaining a stable feed of (often two-way) live video and audio streaming.

    I will just put all of the known factors out for your analysis and if anyone is willing and able to give me some answers I will be extremely grateful.

    Using a stand-alone (Adobe Flash based) software program provided by a web chat hosting site, I was previously able to maintain a strong, stable feed. I was initially using a laptop with 6GB RAM, a 750GB hard drive, a Pentium I5 mobile processor with a base clock of 2.4GHz, a Logitech C920 Pro HD webcam (which only offers an aspect ratio of 16:9), a Jabra Bluetooth earpiece with USB dongle, a 50ft CAT5e cable to an AT&T Gateway, and an internet plan touted to offer 21Mbps download and 1.5Mbps upload speeds respectively. A speed test pinging a server less than 200 miles away would net a ping of 142 as well as speeds of 8-11Mbps down and 1.39-1.41Mbps up. A different speed test pinging the actual servers I communicate with in Budapest, Hungary (geographically nowhere near Ohio, USA) would ping at 173 and then net speeds of up to 7 down and 1.39 up.

    The site’s broadcasting aspect ratio is 4:3. I purchased a downgraded Logitech C910 Pro HD cam because it captured in the preferred ratio of the site. In addition, I had Time Warner Cable come and install a single cable connection coming into the wall of my studio room, dedicated solely to my computer for these purposes, wired from the wall to the computer with a new CAT6 cable. The actual Hungary speed test is netting (consistently) 21-24 of the promised 30Mbps download and 3.9-4.3 of the promised 5Mbps upload. Even having made those 2 drastic changes, my stream or feed would crash, or delay up to 6 minutes from the moment an action was made until the members of the chat room would see or hear it, while I was seeing real-time text from them scrolling on my screen.

    Countless utility reports, speed tests, ping tests, packet loss accusations that proved to be false, and hours spent with my online support team having me do everything from clearing my IE cache to wearing a tin foil bra *s*. Long story made just a tiny bit more bearable than it would otherwise be: I was finally advised that I needed a CPU with a speed of 2.7GHz.

    The application software supposedly auto-detects your system’s capabilities and then sets the following parameters: resolution, fps, bitrate, and audio codec. I am a perfectionist, therefore I was only interested in streaming in full HD. At this point, however, I would have settled for a room that didn’t crash or lag and maintained some semblance of consistent performance.

    I now own a brand new HP desktop PC with 16GB RAM, a 1TB hard drive, and an AMD six-core CPU clocked at 3.5 or 3.7GHz (sorry, anything over the required 2.7GHz seemed to me like it should be the fix). It actually was the fix; and then, it wasn’t.

    All of these ramblings actually lead to a fairly simple question. My site has now defaulted my (supposedly auto-detected) values to really high settings. Before several updates to the software platform and new ownership with eyes and efforts on an upcoming IPO, I used to be running smoothly with values of 480×360 resolution, MP3 audio codec (vs AAC), 17 fps, and a bitrate of 650. Now, no matter what is detected, or what values I may try to manually set, they are automatically changed remotely by the site to a resolution of 800×600, codec still MP3, a frame rate at a staggering 35fps and a bitrate of 1500.

    Finally, my point, my question: I cannot get a straight answer from my support team. Which of these values (one, two, or all of them) can cause broadcasting errors, delays or, in the case of last night, a ridiculously dark image in the same lighting conditions that earlier caused me to appear “irradiated” in Logitech’s Quick Capture recording application as well as in a Google+ Hangout?

    I may be in the totally wrong aisle at Best Buy here but you seemed to be tossing around similar terminology and I took a chance that maybe someone could help me figure out what factors have the biggest impact so I can make reasonable and intelligent demands from my support team in regards to their digging in and forcing these broadcast values. I really don’t want to demand that they lower my bitrate if that factor has no actual bearing on causality.

    Thank you to anyone who took the time to read my ramblings. Thank you in advance to anyone who takes my question on or tells me I’m in the wrong department, and thank you very much for the oh so valuable information I have bookmarked regarding standard “True” 16:9 aspect ratios.

    Happy Valentines Day!
    Grace

  17. Hi Grace,

    Thanks for your appreciation and reading of my article. I’d be glad to help you, but there are a few missing pieces in your puzzle, and a few elements which require a proper understanding of what’s happening.

    The first issue is the software in question. It’s difficult for me to troubleshoot any software issues if I don’t know all of the specifics. For all I know, the problem could be the software you’re using, and nothing else. However you did mention issues with other chat software, so I’ll have to elaborate on other possibilities.

    Of course, since I’m not in front of your hardware, I can only do as much as suppose, so you’ll have to do the rest of the analysis yourself. The only thing I can say for sure is that any Intel Core i5 processor (not Pentium, small terminology mistake) is more than enough to do video conferencing and should be able to do HD, assuming sufficient network resources.

    Some of the information you provided is also of questionable importance. For instance, 35 fps is a strange framerate for video, so you might be looking at the wrong statistics.

    So, here are my suppositions of what the problem could be:

    Internet connection
    The first thing which rings an alert is your ping of 142 ms with a server less than 200 miles away. This is extremely abnormal, considering the same test conditions give me 5 ms, a full 30 Mb/s down of my 30 Mb/s advertised, and a full 3 Mb/s up of my 3 Mb/s advertised. But I live in Canada, and comparatively speaking, American telecoms are known for their poor network infrastructure and for providing cheap modems. We have friends who moved to the States, and complaining about their Internet connection has become a regular sport for them. To make matters worse, Canada’s networks aren’t even close to the best in the world, so you can imagine how bad it actually is in the United States. It’s third world Internet, with first world prices.

    There might be nothing you can do about it, or the issue may be entirely in another common culprit of poor network performance: your router.

    Router
    I’ll say it right away: almost all routers in existence aren’t worth a dime. The problem is the following: every home router manufacturer in existence except for Cisco, and lately Asus, doesn’t know how to program proper firmware. Plus, even among the manufacturers which are good, only the top-tier models truly work well.

    As a rule of thumb, anything too old (e.g. pre-802.11n) is also worthless for today’s demanding networked applications.

    If your router’s worth under $100, or if it comes from a company other than Cisco or, again, _lately_, Asus, it’s probably worthless and the cause of many networking problems, whether they’ve already happened or will happen in the near future.

    If you tell me the exact model of your router, I can tell you if it’s the problem you’re having. If it’s a good router, the issue may also be caused by firmware bugs, which can most of the time be resolved easily by updating the router’s firmware. Most home routers are not good out of the box and must be updated immediately to work to their full potential, because manufacturers have a habit of shipping products before they’re ready. This is not unique to routers; almost all smart electronics are like this, due to software developers who have become too used to a fix-with-update-after-launch development process, a product of our wonderful Internet age and stupid people combined.

    However, you did say you had some weird issues with Logitech’s Quick Capture webcam software, which I believe is not networked, which points to another potential issue completely unrelated to your network.

    USB connection
    What kind of USB connection are you using for your webcam? If you’re using anything less than USB 2.0, which could be the case depending on the computer’s motherboard and the specific port you’re using, then your poor video performance is most definitely caused by this.

    Video requires lots of raw bandwidth between the camera and the machine which processes (encodes) the images it sends, so if you’re connected on USB 1.1 for instance, there’s not going to be enough bandwidth in that port to support all of the video being sent, resulting in choppy playback of the content received from the webcam.

    However, if you are using a USB 2.0 port (or a later version), then there’s enough bandwidth to handle a good, smooth video stream, and the issue is either related to the software itself (e.g. does the same thing happen in Skype as in Google+ Hangouts?), the network, or the webcam driver, hence the next point.

    Drivers and software
    If you’re using the driver that came on the CD which was provided with your webcam, it’s probably the reason for all your pains, for the same reason your router may not work optimally out of the box. You should go to Logitech’s website and download the latest driver for the platform you’re using.

    Also make sure your computer is updated and that all of your devices (video card, sound card, etc.) have the latest drivers.

    The OS you’re using and the software that’s on it may also be the cause of your problem, or part of it. For instance, if you’re still using Windows XP, you should consider updating to at least Windows 7, because Windows XP is a really bad operating system. Nevertheless, the issues you’re highlighting should not occur on a properly running copy of Windows XP, so this is probably not the issue at hand, although the crapware which is bundled with most new PCs could be slowing down your computer significantly, even with Windows 8.

    In the same line of thought, more or less severe computer problems might be caused by a virus-infected system, or conversely by a no-good antivirus. If you do know your computer’s okay on this front, however, you can safely ignore that, but a look at your security solutions doesn’t hurt.

  18. I hope somebody can help me with this frustrating issue. I am trying to create a DVD of photos. I used MS PowerPoint and set the page size to 16:9. I set all the photos to fit this size and burnt the DVD using Windows DVD Maker, also using the 16:9 setting there. When I play the DVD on my computer it is fine, but when I play it on my widescreen TV it cuts off the sides of the video. Very confused… any suggestions?

  19. What’s going on with the 1366×768 screen resolution? And what’s the difference between 1360×768 and 1366×768??

  20. Hey, I’m watching a 720p video that I converted on my PC and put on a flash drive to watch on my Xbox. The problem is that my TV only supports 480p and the show seems really dark. My TV has no contrast adjustment either; any help would be much appreciated.

  21. @WinXP

    1360×768 and 1366×768 are WXGA resolutions (an imprecise label) that have been common on many notebooks, as well as on many HDTVs in 2006, back then labeled as 720p. It’s also the lowest resolution you can have on Windows 8 (prior to Blue/8.1) to be able to use the window snap feature. Neither is, however, a true 16:9 resolution.

    Case in point, 16/9 equals 1.777…, while 1360/768 equals 1.7708333…, and 1366/768 equals 1.7786458333…

    Hence, you can see that in my list, it’s 1360×765 that’s a true 16:9, but resolutions with odd numbers are seldom used in the industry.

    More info on Wikipedia:
    http://en.wikipedia.org/wiki/Graphics_display_resolution#WXGA_.281280.C3.97768.29

  22. I write for a website and need to watermark images at 640×320 pixels, but any app I find (I do most work on my phone) only gives me ratios and not pixels. What ratio do I use, and how do I know the image is 640×320 when done??? Thank you

  23. Video: frame width 672,
    frame height 288,
    data rate 1715 kbps,
    total bitrate 2163 kbps,
    frame rate 25 frames/second?
    Audio: bit rate 448 kbps,
    channels: 2 (stereo),
    audio sample rate 48 kHz

  24. (Same video and audio specs as in my previous comment.)
    Which container formats support this, e.g. AVI, MP4, 3GP, MKV, etc.?

  25. Why isn’t 1600×900 a true 16:9 resolution? Just divide both numbers by 100 and you have 16 and 9.

  26. Thanks for the list and the thorough explanation on the ratios. I need them for YouTube videos only, but I’m glad you brought up the divisible-by-8 list as well.

  27. Good! 136?x768 (where ? can be 0, 6 or 8) isn’t a true 16:9, but at least 1360×765 is a true 16:9.

  28. I want to clear something up where it pertains to TV video and older “resolutions”, because some people may be under the impression that SD analog is based on ### X ### pixels. It just is not.
    So here’s a little explanation, and I’ll try to keep it as simple as possible.
    TV [SD analog] comes in several ‘formats’. I’m in the US of A, so I’ll talk about NTSC, but PAL and SECAM are similar.
    To keep it simple I will only talk about pixels and lines, and not about bandwidth, audio, chroma [color], etc.

    TERMS I NEED TO INTRODUCE:
    SAR [Storage Aspect Ratio]
    PAR [Pixel Aspect Ratio.]
    DAR [Display Aspect Ratio.]
    SAR x PAR = DAR {remember that.}

    Analog video needed to be able to display a picture from a camera so all the cameras and TVs needed to be compatible.

    -Highly technical stuff
    A TV contains a CRT [Cathode Ray Tube] which consists of a beam, coils, and a phosphorous surface or ‘screen.’ The beam emits electrons onto the screen at varying light levels based on the input to the beam emitter. The coils deflect the beam to different areas of the screen.
    A voltage across the horizontal coil deflects the beam left or right. The Vertical coil, up or down.
    The beam moves left to right across the top edge of the screen and is emitting at varying levels creating the first line.
    The beam is turned off as it reaches the right side, returns to the left side, skips one line, and the process repeats all the way down the screen, filling in the odd-numbered lines.
    At the bottom right, the beam is turned off while it retraces to the top left plus one line, and the process repeats again, filling in the even-numbered lines.
    Each pass takes ~1/60th of a second and the whole screen is drawn in ~1/30th of a second.
    This process is called interlacing and is the [i] in 480i, 1080i, etc.
    If that beam was traced line by line, top to bottom, without skipping it would be called progressive scan [p] {720p, 1080p, etc.}
    -
    So NTSC analog consists of lines scanned interlaced at 30 frames per second [60 half-frames].
    The number of horizontal lines is 525, but only ~480 lines are visible. The screen was rectangular at a 4:3 aspect, so a 12″ wide screen would be 9″ tall [only the visible part]. CRTs were imperfect, so the screen might not be exactly that size. The vertical height then is ~480 lines.
    The horizontal ‘lines’ are depicted not as an absolute, but as a function of bandwidth [which I won’t get into]. Bandwidth figures in such that a certain number of dots can be discerned on a line. The old way of stating this was horizontal resolution. That spec was further confusing because they wanted to talk about a square area of the screen, so they would only look at the center 3/4ths of the width [3:3]. A ½” VHS tape had a spec of 230 ‘lines’, which meant that on playback, the center 3/4ths of the width of the screen made it possible to discern 230 dots on any horizontal line. U-matic 3/4″ tape had a spec of ~340 ‘lines’.
    So, to say TV was 480i is an oversimplification.

    Digital displays [LCD, LED, etc.] use actual pixels or points, so the ‘native resolution’ is the actual number of points on the screen.
    Digital SDTV is given a spec of 640×480, which approximates VGA video and is natively SD 480p, not to be confused with HD 480p.
    Video is still either interlaced or not [progressive] and hence we get [i] or [p].

    As if that wasn’t enough, they introduced non-square [ANAMORPHIC] pixels, which allow a non-‘native’ 16:9 video to LOOK 16:9. E.g.: 1080i is NOT 1920x1080x60p [SAR] and 1:1 PAR!
    It is 1440x1080x30i [or 60i, or 24i...] at 1.3333:1 PAR. So 1440 × 1.3333 : 1080 × 1 ≈ 1920:1080, so the DAR is ~16:9 and it looks right on the screen.

    So, if you want square pixels, stick with 16:9 SAR [p] video and 1:1 PAR

    But in an imperfect world we have [i] video so beware.

    Also don’t confuse a DAR with PAR

    DAR is what the displayed video looks like. [4:3, 16:9, etc.]{Or 1.3, 1.77:1, etc.}
    PAR is often 1:1, 1.3333:1 and sometimes 1.5:1, 1.66:1, or even 2:1 and denotes square vs. anamorphic pixels
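
    A quick Python sketch (illustrative only) of the SAR x PAR = DAR relationship, using the anamorphic example above:

    from fractions import Fraction

    def display_aspect(storage_w, storage_h, par):
        # DAR = SAR x PAR
        return Fraction(storage_w, storage_h) * par

    print(display_aspect(1440, 1080, Fraction(4, 3)))  # 16/9
    print(1440 * Fraction(4, 3))                       # 1920, the displayed width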

    Thanks

  29. Here’s a nugget.
    Have you ever wondered where they got 16:9 as a standard? I note that 4:3 is the old standard, and 4 squared = 16 and 3 squared = 9.
    So (4×4):(3×3) is the same as 16:9. Weird?

  30. Hi Pacoup, your answer to WinXP cleared up my doubts. I’m producing a screencast video for the first time.

    My screen resolution is 1366×768, and when I produced the video it looked crisp and clear on my screen. However, I don’t know how it would look at bigger screen resolutions. Would there be any degradation when the video is viewed at different (higher than 1366×768) screen sizes?

    Do I need to record at 1360×768 for less degradation of pixels when viewed at smaller or bigger screen sizes?

    Your article is extremely helpful. Thanks for your effort.

  31. When screencasting, your best bet is to record at the resolution which your client is going to see, to avoid desktop UI elements being stretched, etc.

    However, nowadays, especially with high pixel density displays, it’s not necessarily a feasible thing.

    The only thing you really want to ensure is that people will be able to see what’s on the screen. So if your target is YouTube, maybe set your recording resolution to 720p, or 1080p with a boosted font size if possible.

    It’s a difficult question to answer without testing the result.
