List of True 16:9 Resolutions

Updated to include full range of resolutions up to 8K UHDTV.

In an effort to enhance the knowledge of the video-making community, I have compiled a list of all true 16:9 video resolutions, including their associated standard when applicable, as well as a flag for when the resolution is divisible by 8, which is useful for limited video encoders. The table runs from 16×9 all the way up to 8K UHDTV (Super Hi-Vision) and includes common resolutions along the way, such as that of a typical 27-inch 16:9 computer monitor.

Note: If you’ve ever worked with SD content, you’ll notice that no resolution here fits the DVD standard. That’s because DVDs were originally made to comply with the NTSC broadcasting resolution, which is a non-square pixel standard using the resolution of 720 by 480 pixels, stretched to accommodate either 4:3 or 16:9 content, never producing a true 16:9 resolution.
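For reference, the whole table can be regenerated with a few lines of Python; a minimal sketch (the function name is mine, for illustration only):

```python
# Generate every true 16:9 resolution from 16x9 up to 8K (7680x4320),
# flagging rows whose width and height are both divisible by 8.
def true_16_9_resolutions(max_width=7680):
    rows = []
    for width in range(16, max_width + 1, 16):
        height = width * 9 // 16  # exact, since width is a multiple of 16
        rows.append((width, height, width % 8 == 0 and height % 8 == 0))
    return rows

rows = true_16_9_resolutions()
print(len(rows))   # 480 rows, matching the table below
print(rows[7])     # (128, 72, True), the first row divisible by 8
print(rows[-1])    # (7680, 4320, True), 8K UHDTV
```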

Width Height Standard Divisible by 8
16 9
32 18
48 27
64 36
80 45
96 54
112 63
128 72 Yes
144 81
160 90
176 99
192 108
208 117
224 126
240 135
256 144 Yes
272 153
288 162
304 171
320 180
336 189
352 198
368 207
384 216 Yes
400 225
416 234
432 243
448 252
464 261
480 270
496 279
512 288 Yes
528 297
544 306
560 315
576 324
592 333
608 342
624 351
640 360 Yes
656 369
672 378
688 387
704 396
720 405
736 414
752 423
768 432 Yes
784 441
800 450
816 459
832 468
848 477
864 486
880 495
896 504 Yes
912 513
928 522
944 531
960 540
976 549
992 558
1008 567
1024 576 Yes
1040 585
1056 594
1072 603
1088 612
1104 621
1120 630
1136 639
1152 648 Yes
1168 657
1184 666
1200 675
1216 684
1232 693
1248 702
1264 711
1280 720 720p HD Yes
1296 729
1312 738
1328 747
1344 756
1360 765
1376 774
1392 783
1408 792 Yes
1424 801
1440 810
1456 819
1472 828
1488 837
1504 846
1520 855
1536 864 Yes
1552 873
1568 882
1584 891
1600 900
1616 909
1632 918
1648 927
1664 936 Yes
1680 945
1696 954
1712 963
1728 972
1744 981
1760 990
1776 999
1792 1008 Yes
1808 1017
1824 1026
1840 1035
1856 1044
1872 1053
1888 1062
1904 1071
1920 1080 1080p HD Yes
1936 1089
1952 1098
1968 1107
1984 1116
2000 1125
2016 1134
2032 1143
2048 1152 Yes
2064 1161
2080 1170
2096 1179
2112 1188
2128 1197
2144 1206
2160 1215
2176 1224 Yes
2192 1233
2208 1242
2224 1251
2240 1260
2256 1269
2272 1278
2288 1287
2304 1296 Yes
2320 1305
2336 1314
2352 1323
2368 1332
2384 1341
2400 1350
2416 1359
2432 1368 Yes
2448 1377
2464 1386
2480 1395
2496 1404
2512 1413
2528 1422
2544 1431
2560 1440 27″ Monitor Yes
2576 1449
2592 1458
2608 1467
2624 1476
2640 1485
2656 1494
2672 1503
2688 1512 Yes
2704 1521
2720 1530
2736 1539
2752 1548
2768 1557
2784 1566
2800 1575
2816 1584 Yes
2832 1593
2848 1602
2864 1611
2880 1620
2896 1629
2912 1638
2928 1647
2944 1656 Yes
2960 1665
2976 1674
2992 1683
3008 1692
3024 1701
3040 1710
3056 1719
3072 1728 Yes
3088 1737
3104 1746
3120 1755
3136 1764
3152 1773
3168 1782
3184 1791
3200 1800 Yes
3216 1809
3232 1818
3248 1827
3264 1836
3280 1845
3296 1854
3312 1863
3328 1872 Yes
3344 1881
3360 1890
3376 1899
3392 1908
3408 1917
3424 1926
3440 1935
3456 1944 Yes
3472 1953
3488 1962
3504 1971
3520 1980
3536 1989
3552 1998
3568 2007
3584 2016 Yes
3600 2025
3616 2034
3632 2043
3648 2052
3664 2061
3680 2070
3696 2079
3712 2088 Yes
3728 2097
3744 2106
3760 2115
3776 2124
3792 2133
3808 2142
3824 2151
3840 2160 4K UHDTV Yes
3856 2169
3872 2178
3888 2187
3904 2196
3920 2205
3936 2214
3952 2223
3968 2232 Yes
3984 2241
4000 2250
4016 2259
4032 2268
4048 2277
4064 2286
4080 2295
4096 2304 Yes
4112 2313
4128 2322
4144 2331
4160 2340
4176 2349
4192 2358
4208 2367
4224 2376 Yes
4240 2385
4256 2394
4272 2403
4288 2412
4304 2421
4320 2430
4336 2439
4352 2448 Yes
4368 2457
4384 2466
4400 2475
4416 2484
4432 2493
4448 2502
4464 2511
4480 2520 Yes
4496 2529
4512 2538
4528 2547
4544 2556
4560 2565
4576 2574
4592 2583
4608 2592 Yes
4624 2601
4640 2610
4656 2619
4672 2628
4688 2637
4704 2646
4720 2655
4736 2664 Yes
4752 2673
4768 2682
4784 2691
4800 2700
4816 2709
4832 2718
4848 2727
4864 2736 Yes
4880 2745
4896 2754
4912 2763
4928 2772
4944 2781
4960 2790
4976 2799
4992 2808 Yes
5008 2817
5024 2826
5040 2835
5056 2844
5072 2853
5088 2862
5104 2871
5120 2880 Retina 5K Yes
5136 2889
5152 2898
5168 2907
5184 2916
5200 2925
5216 2934
5232 2943
5248 2952 Yes
5264 2961
5280 2970
5296 2979
5312 2988
5328 2997
5344 3006
5360 3015
5376 3024 Yes
5392 3033
5408 3042
5424 3051
5440 3060
5456 3069
5472 3078
5488 3087
5504 3096 Yes
5520 3105
5536 3114
5552 3123
5568 3132
5584 3141
5600 3150
5616 3159
5632 3168 Yes
5648 3177
5664 3186
5680 3195
5696 3204
5712 3213
5728 3222
5744 3231
5760 3240 Yes
5776 3249
5792 3258
5808 3267
5824 3276
5840 3285
5856 3294
5872 3303
5888 3312 Yes
5904 3321
5920 3330
5936 3339
5952 3348
5968 3357
5984 3366
6000 3375
6016 3384 Yes
6032 3393
6048 3402
6064 3411
6080 3420
6096 3429
6112 3438
6128 3447
6144 3456 Yes
6160 3465
6176 3474
6192 3483
6208 3492
6224 3501
6240 3510
6256 3519
6272 3528 Yes
6288 3537
6304 3546
6320 3555
6336 3564
6352 3573
6368 3582
6384 3591
6400 3600 Yes
6416 3609
6432 3618
6448 3627
6464 3636
6480 3645
6496 3654
6512 3663
6528 3672 Yes
6544 3681
6560 3690
6576 3699
6592 3708
6608 3717
6624 3726
6640 3735
6656 3744 Yes
6672 3753
6688 3762
6704 3771
6720 3780
6736 3789
6752 3798
6768 3807
6784 3816 Yes
6800 3825
6816 3834
6832 3843
6848 3852
6864 3861
6880 3870
6896 3879
6912 3888 Yes
6928 3897
6944 3906
6960 3915
6976 3924
6992 3933
7008 3942
7024 3951
7040 3960 Yes
7056 3969
7072 3978
7088 3987
7104 3996
7120 4005
7136 4014
7152 4023
7168 4032 Yes
7184 4041
7200 4050
7216 4059
7232 4068
7248 4077
7264 4086
7280 4095
7296 4104 Yes
7312 4113
7328 4122
7344 4131
7360 4140
7376 4149
7392 4158
7408 4167
7424 4176 Yes
7440 4185
7456 4194
7472 4203
7488 4212
7504 4221
7520 4230
7536 4239
7552 4248 Yes
7568 4257
7584 4266
7600 4275
7616 4284
7632 4293
7648 4302
7664 4311
7680 4320 8K UHDTV / Super Hi-Vision Yes

89 thoughts on “List of True 16:9 Resolutions”

  1. Wow, just what I needed! Now I can convert my DVDs to 16:9. Before, I would convert to 640×352, but there would be some black bars at the top and bottom of my videos when played on my 16:9 screen, so this has helped so much.

  2. Is “854 by 480” not true 16:9? I’ve come across that as a square pixel variation of the widescreen “720 by 480” format. I’m just wondering if that would cause any conflict…

  3. (where ~ means the trailing digits repeat infinitely)

    The conflict comes from the very fact that it is based on 720 by 480. You can't get a ratio of 16:9 by keeping 480 lines of pixels. The closest you'll get is 853.333~ by 480, which is not feasible. You could opt for either 853 or 854 by 480, but these aren't true 16:9 resolutions, which is why they aren't featured here.

    16:9 = 1.77777777~
    853:480 = 1.77708333~
    854:480 = 1.77916666~
    1280:720 = 1.77777777~

    Historically speaking, 720 by 480 is based on the digital conversion of the NTSC broadcast standard by Sony with the D-1 standard. Your conventional 4:3 CRT TV could be said to display non-square pixels in interlaced at a resolution of 720 by 480.

    There are two things you have to understand about this format:
    1. It never displayed square pixels, so there was never a need to translate 4:3 content to, say, 640 by 480.
    2. It was never meant to display widescreen content, hence the aberration that is the DVD's letter-boxing of 16:9 content, and the appearance of many approximations such as 854 by 480, or Apple's all too wrong 848 by 480, none of which, by the way, have ever been standard.

    If you really want to stick to 480p, you could encode all your videos in 720 by 480 and let the software scale it to 16:9, which would at least allow a full-screen view to emulate true 16:9 with the higher pixel count, such as an upscaled 1080p on an HDTV.
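    The ratio comparison quoted above is easy to verify exactly with Python's standard fractions module:

```python
from fractions import Fraction

# Compare each candidate against the exact 16:9 ratio.
target = Fraction(16, 9)
for w, h in [(853, 480), (854, 480), (1280, 720)]:
    print(w, h, float(Fraction(w, h)), Fraction(w, h) == target)

# The width needed for 480 lines would be 480 * 16 / 9, which is
# not an integer, so no true 16:9 resolution exists at 480 lines.
print(float(Fraction(480 * 16, 9)))  # 853.333...
```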

  4. Pacoup, thanks for the list, I really appreciate it. I posted a YouTube video to my website with YouTube's embed feature. The weird thing is that when I chose custom dimensions, I set it to 384, expecting it to be 384×216 (per your list), but it showed 284×225. Even more strange is that it created an iframe with the width 384×195. It looks like YouTube doesn't support smaller 16:9 ratios, as it seems it has been converted to 4:3 with letter-boxing at that size.

    the video is here

  5. I have a question about the numeric 'p' labels. For instance, when one sees "480p", why is the vertical resolution 400 (720/704×400) on 16:9? Yet "352p" is 624×352 and "576p" is ~854×480? I don't understand where the labels come from, and why some match the vertical resolution and some don't. I know some of it has to do with the old 4:3 aspect, like 640×480 being "480p". Basically, I want to know: if, say, I encode video in 1024×576 resolution, what can I call it in terms of "XXXp"? I know not "576p", since that's 854×480. Any help or comments are appreciated.

  6. The problem is you're referring to standards which refer to old TV formats from when widespread video used to be 4:3.

    Traditional 576p is a non-square pixel standard designed for progressive scan SDTV in PAL and SECAM countries. Translated to digital broadcast, two resolutions are widely used, either 720 by 576 or 704 by 576, neither of which is 4:3, so the pixels are non-square. This is fundamentally different from how video on the Web works.

    To add to the confusion, SBS and Seven Network in Australia used to have 1024 by 576 as an HDTV resolution, and they called it 576p as well. They have now migrated to the standard 1080i on ATSC or DVB.

    The reason why 854 by 480 may occur with 576p content is that NTSC, the previous standard used in North America and Japan, only has 480 vertical lines. Therefore, the content needs to be converted to 720 by 480, which translates to 854 by 480 when displayed on a digital square-pixel display such as your computer monitor.

    640 by 480 was the 4:3 translation of standard 480i broadcasts in NTSC, which were, as mentioned before, 720 by 480. 640 by 480 is not a standard. There are technically variations to this, such as 720 by 486, 704 by 480 or 704 by 486, but they have to do with the way analog content was later transferred digitally to analog TVs (pre-HDTV era in NTSC countries), and this is slightly out of scope. You can read more on Wikipedia if you're interested.

    The 720 or 704 by 400 phenomenon is the principle of applying a 16:9 ratio to 480p content without changing the width of the file, the width being either 720 or 704, as mentioned before. Remember true NTSC 480p, such as what is seen on DVD, is not square pixels. You have to stretch the video to 4:3 or 16:9, so both 720 by 400 and 854 by 480 are the same; they're just 480p content stretched to different scales. Neither resolution is actually standard, since the NTSC standard itself refers only to analog content. This is why there are so many variations. Every manufacturer pretty much figured out their own solution.

    As for 352p, it’s just an invention of the Web, and it is not true 16:9, so it’s ill-advised to use it.

    If I were you, I would stick to real standards, such as 720p or 1080p, and not try to go with awkward resolutions like 1024 by 576, despite them being true 16:9. 1024 by 576 is a good interim resolution for scaled down 720p or 1080p video on a webpage though, because most users will be able to see the entire video given the prevalence of 1024 by 768+ displays. Doing so also avoids confusion with prior 576p standards, which actually refer to PAL and SECAM content rather than digital HD video.

    I would also try not to use 480p for 16:9 content because it is an aberration created by the transition from analog to digital. My personal favorites for 16:9 content are 360p, 504p, 720p and 1080p, although I would much rather have only HD content.

  7. Scott, the problem with your video is that the YouTube player has a UI which is 26 pixels high. If you set your player’s <iframe> to 384 by 242, your video should appear without a black frame.

  8. I am even more confused; my new monitor is widescreen at 1440 x 900 pixels.
    I have finally found drivers and BIOS upgrades for my two older computers and their graphics cards to give this resolution option under Windows XP. I can see that it should be 1600 x 900 for true 16:9, so why do things look right at this resolution? Or is my brain auto-correcting any perceived error?

  9. Roger, 1440 x 900 is a standard 16:10 resolution commonly used on notebooks. It is not 16:9, and that’s normal.

    Your monitor's resolution is typically independent of the content being shown, even on some HDTVs. The GUI will normally be displayed in the correct aspect ratio, whatever the resolution. Your videos should be displayed correctly, but might be letterboxed (black bars) because of differing monitor/video aspect ratios.

    This article is specifically about 16:9 ratios for typical widescreen video encoding. Other videos might not be in 16:9, such as old SDTV shows which use a 4:3 ratio, and cinema pictures which use various ratios such as 2.35:1, which this article does not cover.

  10. 16:9 isn’t a “resolution”, it’s an aspect ratio. 720 by 480 is a display size. 72 or 96 dpi are resolutions.

  11. I don’t know if you meant to correct me or indicate to the previous commenter what 16:9 means, but note that’s not what I meant.

    I wrote: “1440 x 900 is a standard 16:10 resolution”, which, contextually, meant: “1440 x 900 is a standard 16:10 [format] resolution”, that is, “1440 [pixels] [by] 900 [pixels] is a standard […] resolution”.

    My grammar was correct.

  12. I haven’t started looking for this on your web site yet, but I am trying to get information on 2 areas pertaining to videos.

    1. What is “MUX” rate and what happens when one adjusts this rate?

    2. I have seen videos use anywhere from 256 kbps to 1800kbps. Is there a recommended kbps per resolution to avoid the boxy areas on the video?

    Thanks in advance for your time.

    I have bookmarked your page as I find it very helpful!!

  13. I haven’t specifically written about all of those, or recent enough articles have not been written, but here are some explanations:

    1. MUX is an abbreviation for Multiplexing. In electronics, a Multiplexer is used to combine different signals into a single stream, typically for easier handling. In the case of digital video, multiplexing plays two roles: combining audio and video together in a single time-encoded stream so the player doesn't fall out of sync, and keeping the stream read by the player at a constant bitrate to provide an easier-to-handle stream (in theory, but this ultimately depends on the codec and decoder implementation).

    The MUX rate refers to the total bandwidth used to compile all of the elementary streams, such as audio and video, into a single one. For instance, an MPEG-2 video could use a MUX rate of 10.08 Mb/s for a given video, meaning packets from the audio and video streams will be split and combined in such a way that the receiving player gets a 10.08 Mb/s feed.

    While the MUX rate is typically set automatically, in certain scenarios it's difficult to know what the optimal split of the video and audio stream bitrates will be, like when encoding for a DVD, but the maximum MUX rate is known, as it's directly dependent on the specs of the DVD standard in this case (i.e. the maximum read speed of a DVD disc). The MUX rate can thus be enforced, and the encoder can be told to use whatever bandwidth is available to encode the video.

    Additionally, the size of the packets used to combine data in the Muxed stream can usually be defined. Larger packets will be faster to transfer, but in situations such as weak Internet connections, they will increase the error rate as one dropped packet loses more information of the video than a smaller one would. So, in Web encoding, a smaller packet size may be desired.

    Arbitrarily changing the MUX rate is usually not something you do, unless you’re targeting specific fixed media. Good encoders should define the optimal MUX rate for you automatically, based on the total bandwidth of your video streams.
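    As a back-of-the-envelope sketch, the mux budget is just the sum of the elementary streams plus container overhead. The 10.08 Mb/s ceiling is the DVD figure mentioned above; the audio bitrate and the 2% overhead are illustrative assumptions, not spec values:

```python
# Rough mux-rate budget. Only the 10.08 Mb/s DVD ceiling comes from
# the text above; the audio bitrate and overhead fraction are assumed.
MAX_MUX_RATE = 10.08e6  # bits/s, DVD program-stream maximum
AUDIO_RATE = 448e3      # e.g. one audio track at 448 kb/s (assumed)
OVERHEAD = 0.02         # ~2% packetization overhead (assumed)

# Bandwidth left over for the video elementary stream:
video_budget = MAX_MUX_RATE * (1 - OVERHEAD) - AUDIO_RATE
print(f"{video_budget / 1e6:.2f} Mb/s available for video")
```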

    2. As for the specific bitrate of any video, the answer depends on the following factors:
    – Codec
    – Encoder
    – Encoder configurations / Encoding profile
    – Resolution
    – Frame rate
    – Color space
    – Chroma format
    – Video complexity
    – Type of bitrate (constant, variable)

    There is no one specific formula for the ultimate bitrate, but in general, experience can give you some guidelines.

    For instance, with H.264 or VP8 (typically contained in "mp4" and "webm" files), you can obtain good enough quality for most scenarios with a 10 Mb/s variable bitrate on a 1080p video, which is 2,073,600 pixels, in 4:2:0 chroma format and a fast encoder setting on Main or High Profile, at Level 4.1. Assuming these settings, you can calculate the required bandwidth based on the total number of pixels. For instance, 720p is 921,600 pixels in surface, meaning the required bitrate is in theory 2.25x lower to achieve the same quality, roughly equivalent to 4.4 Mb/s.
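    Written out as code, the pixel-count scaling looks like this; a minimal sketch, where the 10 Mb/s reference figure is just the rule of thumb from this reply, not a standard:

```python
# Scale a reference bitrate by the ratio of pixel counts, per the
# rule of thumb above (same codec, settings and content assumed).
def scaled_bitrate(ref_mbps, ref_w, ref_h, target_w, target_h):
    return ref_mbps * (target_w * target_h) / (ref_w * ref_h)

# 10 Mb/s at 1080p scaled down to 720p:
print(round(scaled_bitrate(10, 1920, 1080, 1280, 720), 2))  # 4.44
```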

    In my personal experience, I have achieved better quality with 2.6 Mb/s on the same video, but the encoding time is as much as octupled because of the massive amount of processing required by the increased quality settings in the encoder, and the video is more difficult to play back (i.e. it may not be friendly with older computers or the majority of DSPs, digital signal processors, like the chip dedicated to decoding H.264 video in your GS2).

    Most encoders also feature quality-based encoding, where instead of encoding by analyzing the video and giving the best bitrate available to the scenes given preset constraints, the encoder assigns a bitrate automatically according to a quality setting. This can yield good quality regardless of the video — the bandwidth will simply pump up if video complexity requires it — and can be much faster to encode due to not needing two or more encoding passes, but it can never achieve the efficiency of the targeted variable bitrate.

    All in all, I’d say that video encoding is a science in and of itself, and depending on your needs, or interests for that matter, you may have to do a bit of studying.

    Here are a few article links to get you started:

  14. I don’t mean to nitpick.. But you should probably leave the “p” and “i” out of your explanations, as this only confuses the issue.

    P and I refer to "progressive" and "interlaced", and are a hold-over from the days of analog broadcasting. They refer to how a stream is encoded. For example: a typical NTSC broadcast stream is encoded at 720×240 @ 60 Hz, with each field containing only half of the video information. These fields are then displayed back-to-back at 60 fields per second to create the full 480 lines of resolution at 30 fps.

    It was necessary to do it this way, as the simple electronics of early television equipment relied on the frequency of alternating current (Hz) to achieve its framerate… 60 Hz = 30 fps for NTSC, and 50 Hz = 25 fps for PAL. Most converters nowadays only translate interlaced content to progressive, and don't even give you the option to encode interlaced content. Most offer options on how to "de-interlace", and that's about it.

    People who buy TVs rated at 1080i often complain of there being lines across their screen. That's because they're only getting half the picture. You get what you pay for! 🙂

  15. I'd rather argue that the mere fact ATSC broadcasts are only 1080i makes interlaced content still relevant today. I would expect anyone working in the broadcast industry or video encoding field to know this.

    But thanks for the good explanation on where interlacing comes from =D

  16. I am a bit out of my depth in this amazing community I stumbled into while editing photos. I must say though that I actually feel a fraction of my previous IQ score higher than I did when I arrived. I feel the chances are better than “60%” in my favor that I may well be able to hold my own if someone has the knowledge and time and willingness to answer my query (in relative laymen terms). This opinion is based solely on the fact that I read through every post on this page and only once or twice imitated “Thor looking at fire for the first time”.

    I have a streaming video/audio question. I am not certain if mine is in a category that relates to your forum or not, but it sure sounds like it should. My question relates to maintaining a stable feed of (on many occasions two-way) live streaming video and audio.

    I will just put all of the known factors out for your analysis and if anyone is willing and able to give me some answers I will be extremely grateful.

    Using a stand-alone (Adobe Flash based) software program provided by a web chat hosting site, I was previously able to maintain a strong, stable feed. I was initially using a laptop with 6GB RAM, a 750GB hard drive, a Pentium I5 mobile processor with a base clock of 2.4GHz, a Logitech C920 Pro HD webcam (which only offers an aspect ratio of 16:9), a Jabra Bluetooth earpiece with USB dongle, a 50ft CAT5e cable to an AT&T Gateway, and an internet connection touted to offer 21Mbps download and 1.5Mbps upload speeds respectively. A speed test pinging a server less than 200 miles away would net a ping of 142 as well as speeds of 8-11Mbps down and 1.39-1.41Mbps up. A different speed test pinging the actual servers I communicate with in Budapest, Hungary (geographically nowhere near Ohio, USA) would ping at 173 and then net speeds of up to 7 down and 1.39 up.

    The site's broadcasting aspect ratio is 4:3. I purchased a downgraded Logitech C910 Pro HD cam because it captured in the preferred ratio of the site. In addition, I had Time Warner Cable come and install a single cable connection coming into the wall of my studio room, dedicated solely to my computer for these purposes only, wired from the wall to the computer with a new CAT6 cable. The actual Hungary speed test is netting (consistently) 21-24 of the promised 30Mbps download and 3.9-4.3 of the promised 5Mbps. Having made those two drastic changes, my stream or feed would crash, delaying up to 6 minutes from the moment an action was made until the members of the chat room would see or hear it, while I was seeing real-time text scrolling on my screen from them.

    Countless utility reports, speed tests, ping tests, packet-loss accusations that proved to be false, and hours spent with my online support team having me do everything from clearing my IE cache to wearing a tin foil bra *s*. Long story made just a tiny bit more bearable than it would otherwise be: I was finally advised that I needed a CPU with a speed of 2.7GHz.

    The application software supposedly auto-detects your system's capabilities and then sets the following parameters: resolution, fps, bitrate, and audio codec. I am a perfectionist, therefore I was only interested in streaming in full HD. At this point, however, I would have settled for a room that didn't crash or lag and maintained some semblance of consistent performance.

    I now own a brand new HP desktop PC with 16GB RAM, a 1TB hard drive, and an AMD six-core CPU clocked at 3.5 or 3.7GHz (sorry, anything over the required 2.7GHz seemed to me like it should be the fix). It actually was the fix; and then, it wasn't.

    All of these ramblings actually lead to a fairly simple question. My site has now defaulted my (supposed to be auto-detected) values to really high settings. Before several updates to the software platform and new ownership with eyes and efforts on an upcoming IPO, I used to run smoothly with values of resolution 480×360, audio codec MP3 vs AAC, 17 fps, and a bitrate of 650. Now, no matter what is detected, or what values I may try to manually set, they are automatically changed remotely by the site to a resolution of 800×600, codec still MP3, a staggering frame rate of 35fps, and a bitrate of 1500.

    Finally, my point, my question: I cannot get a straight answer from my support team. Which of these values (one, two, or all of them) can cause broadcasting errors, delays, or, in the case of last night, a ridiculously dark image in the same lighting conditions that earlier caused me to appear "irradiated" in Logitech's Quick Capture recording application as well as in a Google+ Hangout?

    I may be in the totally wrong aisle at Best Buy here but you seemed to be tossing around similar terminology and I took a chance that maybe someone could help me figure out what factors have the biggest impact so I can make reasonable and intelligent demands from my support team in regards to their digging in and forcing these broadcast values. I really don’t want to demand that they lower my bitrate if that factor has no actual bearing on causality.

    Thank you to anyone who took the time to read my ramblings. Thank you in advance to anyone who takes my question on or tells me I’m in the wrong department, and thank you very much for the oh so valuable information I have bookmarked regarding standard “True” 16:9 aspect ratios.

    Happy Valentine's Day!

  17. Hi Grace,

    Thanks for your appreciation and reading of my article. I’d be glad to help you, but there are a few missing pieces in your puzzle, and a few elements which require a proper understanding of what’s happening.

    The first issue is the software in question. It’s difficult for me to troubleshoot any software issues if I don’t know all of the specifics. For all I know, the problem could be the software you’re using, and nothing else. However you did mention issues with other chat software, so I’ll have to elaborate on other possibilities.

    Of course, since I’m not in front of your hardware, I can only do as much as suppose, so you’ll have to do the rest of the analysis yourself. The only thing I can say for sure is that any Intel Core i5 processor (not Pentium, small terminology mistake) is more than enough to do video conferencing and should be able to do HD, assuming sufficient network resources.

    Some of the information you provided is also of questionable importance. For instance, 35 fps is a strange framerate for video, so you might be looking at the wrong statistics.

    So, here are my suppositions of what the problem could be:

    Internet connection
    The first thing which rings an alert is your ping of 142 ms with a server less than 200 miles away. This is extremely abnormal, considering the same test conditions give me 5 ms, a full 30 Mb/s down of my 30 Mb/s advertised, and a full 3 Mb/s up of my 3 Mb/s advertised. But I live in Canada, and comparatively speaking, American telecoms are known for their poor network infrastructure and for providing cheap modems. We have friends who moved to the States, and complaining about their Internet connection has become a regular sport for them. To make matters worse, Canada’s networks aren’t even close to the best in the world, so you can imagine how bad it actually is in the United States. It’s third world Internet, with first world prices.

    There might be nothing you can do about it, or the issue may be entirely in another common culprit of poor network performance: your router.

    I'll say it right away: almost all routers in existence aren't worth a dime. The problem is the following: every home router manufacturer in existence except for Cisco and, lately, Asus, doesn't know how to program proper firmware. Plus, among the manufacturers which are good, only the top-tier models truly work well.

    As a rule of thumb, anything too old (e.g. pre-802.11n) is also worthless for today’s demanding networked applications.

    If your router's worth under $100, or if it comes from a company other than Cisco or, again, _lately_, Asus, it's probably worthless and the cause of many networking problems which have either already happened or will happen in the near future.

    If you tell me the exact model of your router, I can tell you if it's the problem you're having. If it's a good router, the issue may also be caused by firmware bugs, which can, most of the time, be resolved easily by updating the router's firmware. Most home routers are not good out of the box and must be updated immediately to work to their full potential, because manufacturers have a habit of shipping products before they're ready. This is not unique to routers; almost all smart electronics are like this, due to software developers who have become too used to a fix-with-update-after-launch development process, a product of our wonderful Internet age and stupid people combined.

    However, you did say you had some weird issues with Logitech’s Quick Capture webcam software, which I believe is not networked, which points to another potential issue completely unrelated to your network.

    USB connection
    What kind of USB connection are you using for your webcam? If you’re using anything less than USB 2.0, which could be the case depending on the computer’s motherboard and the specific port you’re using, then your poor video performance is most definitely caused by this.

    Video requires lots of raw bandwidth between the camera and the machine which processes (encodes) the images it sends, so if you’re connected on USB 1.1 for instance, there’s not going to be enough bandwidth in that port to support all of the video being sent, resulting in choppy playback of the content received from the webcam.

    However, if you are using a USB 2.0 port (or a later version), then there's enough bandwidth to handle a good, smooth video stream, and the issue is either related to the software itself (e.g. does the same thing happen in Skype as in Google+ Hangouts?), the network, or the webcam driver; hence the next point.

    Drivers and software
    If you’re using the driver that came on the CD which was provided with your webcam, it’s probably the reason for all your pains, for the same reason your router may not work optimally out of the box. You should go to Logitech’s website and download the latest driver for the platform you’re using.

    Also make sure your computer is updated and that all of your devices (video card, sound card, etc.) have the latest drivers.

    The OS you're using and the software that's on it may also be the cause of your problem, or part of it. For instance, if you're still using Windows XP, you should consider upgrading to at least Windows 7, because Windows XP is a really bad operating system. Nevertheless, the issues you're highlighting should not occur on a properly running copy of Windows XP, so this is probably not the issue at hand, although crapware which is bundled with most new PCs could be slowing down your computer significantly, even with Windows 8.

    In the same line of thought, more or less severe computer problems might be caused by a virus-infected system, or conversely a no-good antivirus. If you do know your computer's okay for this, however, you can safely ignore that, but a look at your security solutions doesn't hurt.

  18. I hope somebody can help me with this frustrating issue. I am trying to create a DVD of photos. I used MS PowerPoint and set the page size to 16:9. I set all the photos to fit this size and burnt the DVD using Windows DVD Maker, also using the 16:9 setting there. When I play the DVD on my computer it is fine, but when I play it on my widescreen TV it cuts off the sides of the video. Very confused… any suggestions?

  19. What’s going on with the 1366×768 screen resolution? And what’s the difference between 1360×768 and 1366×768??

  20. Hey, I'm watching a 720p video that I converted on my PC and put on a flash drive to watch on my Xbox. The problem is that my TV only supports 480p and the show seems really dark. My TV has no contrast adjustment either. Any help would be much appreciated.

  21. @WinXP

    1360×768 and 1366×768 are WXGA resolutions (an imprecise label) that have been common on many notebooks, as well as on many HDTVs in 2006, back then labeled as 720p. It’s also the lowest resolution you can have on Windows 8 (Prior to Blue/8.1) to be able to use the window snap feature. It is not, however, a true 16:9 resolution.

    Case in point, 16/9 equals 1.777…, while 1360/768 equals 1.7708333…, and 1366/768 equals 1.7786458333…

    Hence, you can see that in my list, it’s 1360×765 that’s a true 16:9, but resolutions with odd numbers are seldom used in the industry.
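    To see the arithmetic for yourself, here’s a quick Python sketch comparing each of these resolutions against exact 16:9; the integer cross-multiplication test avoids any floating-point rounding:

```python
# Check how close each common "768-line" resolution is to true 16:9.
TARGET = 16 / 9  # ≈ 1.7777...

for width, height in [(1360, 768), (1366, 768), (1360, 765)]:
    ratio = width / height
    exact = width * 9 == height * 16  # cross-multiply: exact, no float rounding
    print(f"{width}x{height}: {ratio:.10f} (true 16:9: {exact})")
```

    Only 1360×765 passes the exact test; the other two are merely close.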

    More info on Wikipedia:

  22. I write for a website and need to watermark images at 640×320 pixels, but any app I find (I do most work on my phone) only gives me ratios, not pixels. What ratio do I use, and how do I know the image is 640×320 when done? Thank you

  23. Video – Frame width – 672,
    frame height – 288,
    data rate – 1715 kbps,
    total bitrate – 2163 kbps,
    frame rate – 25 frames/second;
    Audio – bit rate – 448 kbps,
    audio sample rate – 48 kHz.
    Which file formats are supported, e.g. AVI, MP4, 3GP, MKV, etc.?

  25. Why isn’t 1600×900 a true 16:9 resolution? Just divide both numbers and you have 16 and 9.

  26. Thanks for the list and the thorough explanation on the ratios. Need them for youtube videos only but glad you brought up the divisible by 8 list as well.

  27. Good! 136?×768 (? can be 0, 6 or 8) isn’t a true 16:9, but at least 1360×765 is.

  28. I want to clear something up as it pertains to TV video and older “resolutions,” because some people may be under the impression that SD analog is based on ### × ### pixels. It just is not.
    So here’s a little explanation, and I’ll try to keep it as simple as possible.
    TV [SD analog] comes in several ‘formats.’ I’m in the US of A, so I’ll talk about NTSC, but PAL and SECAM are similar.
    To keep it simple I will only talk pixels and lines and not about bandwidth, audio, Chroma [color] etc.

    SAR [Storage Aspect Ratio]
    PAR [Pixel Aspect Ratio.]
    DAR [Display Aspect Ratio.]
    SAR x PAR = DAR {remember that.}

    Analog video needed to be able to display a picture from a camera so all the cameras and TVs needed to be compatible.

    -Highly technical stuff
    A TV contains a CRT [Cathode Ray Tube], which consists of an electron beam, deflection coils, and a phosphor-coated surface or ‘screen.’ The beam emits electrons onto the screen at varying intensities based on the input to the beam emitter. The coils deflect the beam to different areas of the screen.
    A voltage across the horizontal coil deflects the beam left or right. The Vertical coil, up or down.
    The beam moves left to right across the top edge of the screen and is emitting at varying levels creating the first line.
    The beam is turned off as it reaches the right side and is returned to the left side and skips 1 line and the process repeats all the way down the screen filling in the odd numbered lines.
    At the bottom right, the beam is turned off while the beam is retracing to the top left plus one line and the process repeats again filling in the even numbered lines.
    Each pass takes ~1/60th of a second and the whole screen is drawn in ~1/30th of a second.
    This process is called interlacing and is the [i] in 480i, 1080i, etc.
    If that beam was traced line by line, top to bottom, without skipping it would be called progressive scan [p] {720p, 1080p, etc.}

    So NTSC analog consists of lines scanned interlaced at 30 frames per second [60 half-frames].
    The number of horizontal lines is 525, but only ~480 lines are visible. The screen was rectangular at a 4:3 aspect, so a 12″ wide screen would be 9″ tall [only the visible part]. CRTs were imperfect, so the screen might not be exactly that size. The vertical resolution, then, is ~480 lines.
    Horizontal ‘resolution’ is specified not as an absolute count, but as a function of bandwidth [which I won’t get into]. Bandwidth figures in such that a certain number of dots can be discerned on a line. The old way of stating this was horizontal resolution. That spec was further confusing because they wanted to talk about a square area, so they would only look at the center 3/4ths of the screen [3:3]. A ½″ VHS tape had a spec of 230 ‘lines,’ which meant that on playback, 230 dots could be discerned across the center 3/4ths of the width of any horizontal line. U-matic ¾″ tape had a spec of ~340 ‘lines.’
    So, to say TV was 480i is an oversimplification.

    Digital displays [LCD, LED, etc.] use actual pixels or points so the ‘native resolution is the actual number of points on a screen.
    Digital SDTV is given a spec of 640×480, which approximates VGA video and is natively SD 480p, not to be confused with HD 480p.
    Video is still either interlaced or not [progressive] and hence we get [i] or [p].

    As if that wasn’t enough, they introduced non-square [ANAMORPHIC] pixels, which allow a non-‘native’ 16:9 video to LOOK 16:9. E.g., 1080i is NOT 1920×1080 [SAR] at 1:1 PAR!
    It is 1440×1080 at 30i [or 60i, or 24i…] with a 1.3333:1 PAR. So 1440×1.3333 : 1080×1 ≈ 1920:1080, the DAR is ~16:9, and it looks right on the screen.

    So, if you want square pixels, stick with 16:9 SAR [p] video and 1:1 PAR

    But in an imperfect world we have [i] video so beware.

    Also don’t confuse a DAR with PAR

    DAR is what the displayed video looks like. [4:3, 16:9, etc.]{Or 1.3, 1.77:1, etc.}
    PAR is often 1:1, 1.3333:1 and sometimes 1.5:1, 1.66:1, or even 2:1 and denotes square vs. anamorphic pixels
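    The SAR × PAR = DAR relation above is easy to verify with exact fractions; here’s a small Python sketch checking both the square-pixel and the anamorphic 1440×1080 cases:

```python
from fractions import Fraction

def display_aspect(width, height, par=Fraction(1, 1)):
    """DAR = SAR (stored frame shape) x PAR (pixel shape)."""
    return Fraction(width, height) * par

# Square-pixel 1920x1080 displays at exactly 16:9
assert display_aspect(1920, 1080) == Fraction(16, 9)

# Anamorphic 1440x1080 with 4:3 (1.3333:1) pixels also displays at 16:9
assert display_aspect(1440, 1080, par=Fraction(4, 3)) == Fraction(16, 9)
```

    Using `Fraction` instead of floats makes the equality exact rather than approximate.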


  29. Here’s a nugget.
    Have you ever wondered where they got 16:9 as a standard? I note that 4:3 is the old standard, and 4 squared = 16 while 3 squared = 9.
    So (4×4)/(3×3) is the same as 16/9. Weird?

  30. Hi pacoup, Your answer to winxp made clear my doubts. I’m producing screen-cast video for first time.

    My screen resolution is 1366×768; when I produce a video it looks crisp and clear on my screen. However, I don’t know how it would look at bigger screen resolutions. Would there be any degradation when the video is viewed at different (higher than 1366×768) screen sizes?

    Do I need to record at 1360×768 for less degradation of pixels when viewed at smaller or bigger screen sizes?

    Your article is extremely helpful. Thanks for your effort.

  31. When screencasting, your best bet is to record at the resolution your client is going to see, to avoid desktop UI elements being stretched, etc.

    However, nowadays, especially with high pixel density displays, it’s not necessarily a feasible thing.

    The only thing you really want to ensure is that people will be able to see what’s on the screen. So if your target is YouTube, maybe set your recording resolution to 720p, or 1080p with a boosted font size if possible.

    It’s a difficult question to answer without testing the result.

  32. Pleeeeeease change “DIVIDABLE” to “DIVISIBLE”

    It’s giving me a headache just thinking about it

  33. Hi,
    I don’t understand what you mean by the “divisible by 8” thing. I can take any number and divide it by 8, so what makes 1792×1008 more special than 1776×999 or 1808×1017?

  34. No, you can’t take any number and divide by 8. That’s not what divisible means:

    (of a number) capable of being divided by another number without a remainder.
    “24 is divisible by 4”
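    In code terms (a trivial Python sketch), divisibility is just a zero remainder:

```python
def divisible_by(n, d):
    """True when n divides evenly by d, i.e. with no remainder."""
    return n % d == 0

assert divisible_by(24, 4)                              # 24 / 4 = 6 exactly
assert divisible_by(1792, 8) and divisible_by(1008, 8)  # a "Yes" row in the table
assert not divisible_by(999, 8)                         # 999 / 8 = 124 remainder 7
```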

  35. Hi Pacoup, very nice explanation. I would like to ask if I can follow the same idea for true 4:3 resolutions.
    Regarding 16:9, why does it have to be the number 8 and not another number? Can you please give an example?

    The number 8 is there because transform blocks in video codecs are typically limited to 8×8 samples, i.e. they are not usually smaller.

    This means that a resolution not divisible by 8 can produce visual artifacts with some encoders, so standards follow the strict 8×8 rule to avoid any issues.

    Although this table lists all perfect 16:9 resolutions, those highlighted in green should be your target, or, even better, those with a standard.
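    For the curious, here’s a small Python sketch of how those rows can be generated. Every true 16:9 resolution is (16k, 9k), and both dimensions are divisible by 8 exactly when k is a multiple of 8, which is why those rows step by 128 pixels of width:

```python
def true_16_9_div8(max_width):
    # Heights 9k are divisible by 8 only when k itself is, so step k by 8.
    return [(16 * k, 9 * k) for k in range(8, max_width // 16 + 1, 8)]

resolutions = true_16_9_div8(1920)
print(resolutions[0], resolutions[-1])  # (128, 72) (1920, 1080)
```

    Up to 1920 wide, this yields the fifteen resolutions marked “Yes” in the table, from 128×72 through 1920×1080.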

  37. Thanks for this!

    I just started learning animating and was looking for a resource with the correct pixel-dimensions for widescreen 16:9 screens.

    #badass and #kudos

Leave a Reply

Your email address will not be published. Required fields are marked *