HandBrake 1.0.0 Released (handbrake.fr)
780 points by bomanbot on Dec 26, 2016 | 174 comments


HandBrake is one of those pieces of software where I've never even had to consider looking around for something slightly better; it's always done what it's supposed to with no fuss. A while back I wanted to rip a DVD my kids got so they could watch it on their tablets, and downloading HandBrake was such a no-brainer that I entirely forgot, before installing it, that I don't have an optical drive built into any of my computers anymore.


I keep Handbrake around even without optical drives - although it's been a few years since I last ripped a disc, it's still easily the best general-purpose encoding tool I've used. Nothing else really approaches its combination of reliable, high-quality output, fast performance, and uncluttered UI.


My annoyance with HandBrake was that it required so many clicks to rip all the videos from a disc while preserving all audio and subtitle tracks. My kids have a bunch of DVDs where several TV episodes are stored as a single title with chapter separators. Getting HandBrake to split them up was also kind of a pain in the GUI.

I ended up making a wrapper for HandBrakeCLI to simplify this: https://github.com/xenomachina/dvdrip
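For anyone curious, the core of such a wrapper is just generating one HandBrakeCLI invocation per chapter range. A minimal sketch, not the actual dvdrip code: the chapter ranges and output naming are made up, and it assumes HandBrakeCLI's `--chapters`, `--all-audio`, and `--all-subtitles` options as documented for the 1.0 CLI.

```python
def episode_commands(device, title, chapter_ranges, outdir):
    """Build one HandBrakeCLI command per episode, keeping every
    audio and subtitle track (flags assumed from the 1.0 CLI)."""
    cmds = []
    for i, (first, last) in enumerate(chapter_ranges, start=1):
        cmds.append([
            "HandBrakeCLI",
            "-i", device,            # source disc or VIDEO_TS directory
            "-t", str(title),        # the one big title holding all episodes
            "--chapters", f"{first}-{last}",
            "--all-audio", "--all-subtitles",
            "-o", f"{outdir}/episode_{i:02d}.mkv",
        ])
    return cmds

# Each command can then be run with subprocess.run(cmd, check=True).
```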


When I switched from openSUSE to macOS as my primary desktop, I needed a replacement for DVDrip (although there is a dvdrip for Mac via brew, I had trouble getting it to work).

Handbrake was the top search hit and immediately became my workhorse for ripping DVDs. (Main use case is to rip my own DVDs for more convenient viewing on another device)

Well done, Handbrake.fr. By the way, mine is version 0.10.5, and when I tell it to check for updates, it says:

HandBrake 0.10.5 x86_64 is currently the newest version available.

I guess the update isn't in the queue quite yet.


If your encoding queue is empty then it should update... otherwise you can always find it here[0]

[0]: https://handbrake.fr/downloads.php


I love this software. I rip my kids' DVDs using it and play them on a Raspberry Pi with Kodi. This way I don't have to wade through menus, language selection (never defaults to mine), commercials, and ridiculous piracy warnings (I paid for it! Don't treat me like a criminal).


Yes, the absurdity of piracy warnings is ridiculous.

Really makes me think twice about giving them my money


Nothing makes me want to go out and pirate something purely as an act of defiance more than unskippable piracy warnings.


For me, it's definitely the un-skippable advertisements. Like, as a software developer and user, I despise when an API is exposed that allows an application or website to suspend expected behavior in favor of custom hijinks, but to me, that's exactly what ignoring my menu and main-menu buttons is.


Especially when it is enforced by licensing terms. At some level, it is just a bit that gets interpreted by the software as "ignore these user commands". It is very reasonable to make a player that listens to user commands even if that bit is set. At a different level, it is a requirement under the terms of the patent licensing to implement this "feature", or else be liable for patent infringement.


But mostly, it annoys people who actually purchased the disc!


Have you been able to figure out ripping Disney DVDs, e.g. Cars?

Disney is doing some funky disc encryption and I didn't find a solution with a few hours of googling.


I often wonder if archivists are out there somewhere, armed with HandBrake, ripping every DVD they can find into a digital format for preservation beyond the life of the disc.

I've been fascinated with some projects that have tried to recreate the original theatrical experience of Star Wars,[0] or groups trying to capture classics that influenced Chinese cinema but haven't been widely reproduced, like Red Heroine.[1]

If everything moves to streaming though, even that could become impossible. Wonder how long until they'll stop printing DVDs...

[0] http://webcache.googleusercontent.com/search?q=cache:b1Dmiou...

[1] https://www.youtube.com/watch?v=Obpyt_tYxCU


Rest assured that there are archivists ripping their discs losslessly somewhere in the dark corners of the internet. Hidden because the copyright mafia will otherwise ruin their lives.


I once knew of a group like this. Eventually, they disappeared - but it was the most wonderful way to access otherwise unavailable arthouse and experimental cinema. I miss it.


I'm curious: how did they meet? Secure IRC?


(I'm not OP)

Experienced the same. I never got into this via IRC. My main thing was music genres, but also, on the side, quite a lot of non-fiction (documentaries) and a little fiction. I met some awesome people on Soulseek and went from there into DC++ and FTP. This was around 2007-9, DC++ being the frontend for the users. It was a gentlemen's club, and from there you just meet different people who see your collection and who invite you into different circles. Back then, there was an auto-trader app written in Java, using self-signed SSL certificates (without pinning). I never understood why they didn't just use rsync over SSH, but it takes more than one person to change such habits.


Look up the history of what.cd


And now I'm sad again.


Also read the book How Music Got Free; it covers a lot of that stuff.


SC? I used to know a few groups like this that concentrated on obscure arthouse films, but I've long since forgotten their names, apart from SC. There was another one that had something to do with a crow, I think. Hmm.


I actually can't remember. I was a teenager then and budding cinephile. I stumbled on an invitation to the group through an online acquaintance in a completely unrelated message board.

It was an adventure to be able to pull down obscure work from people I'd never heard of previously. As it goes, a lot of the stuff was above my head - but occasionally I'd stumble on something amazing and go deep on that creator. Later, my university had a very good cinema library. You had to request films by name, necessitating research and sapping some of the adventure of chance.


What's the best software for lossless archival ripping?


I have used DVD Decrypter [0] (unfortunately Windows-only) to make ISOs from DVDs. All the menus are retained, so I can still access special features (image galleries, character profiles, etc.).

I also hear good things about MakeMKV [1], which apparently allows some sort of lossless ripping of video files (I haven't used it, so I cannot confirm this), although the MKV format does not support menus.

[0] https://en.wikipedia.org/wiki/DVD_Decrypter

[1] http://www.makemkv.com/


I'll second DVD Decrypter. I archive DVDs with that, then use handbrake to pull all the useful bits (the film and special features, or programs in the case of TV series) for Kodi.


Works via Wine, although in recent years some discs have stopped working with the software at all, Wine or not.


I've generally used MakeMKV to get from protected-source to open-copy, but still as MPEG-2 streams without any loss / transcoding. That's not what you'd use as an archivist - you would want something that does structure-level copying instead, so you get menus and chapters and whatnot. Unless you're talking about archiving just the feature, then it's about perfect and lets you preserve everything you care about in a single file.


MakeMKV preserves chapters.


This is an interesting one, in that most data on DVDs is already stored in lossy formats (such as MPEG-2 for video https://en.m.wikipedia.org/wiki/DVD-Video ). The best form of preservation is to rip the contents as a DVD ISO, which will use the same video and audio formats, as well as preserving extra features like the DVD menus. Any DVD ripper that can rip to a DVD ISO format and perform checksums on the ripped contents will suffice.


h.264 or h.265 is going to yield a much better compression ratio for equivalent quality, even after transcoding formats.

On the order of 2:1 or 3:1 vs MPEG2 format for DVD.
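Some back-of-the-envelope arithmetic on that ratio (the bitrates below are assumed typical values, not measurements):

```python
# Rough video-only sizes for a 2-hour feature at assumed average bitrates.
seconds = 2 * 60 * 60
mpeg2_mbps = 6.0   # a common DVD average
h264_mbps = 2.5    # a plausible x264 rate at similar quality

def megabytes(mbps):
    return mbps * seconds / 8   # Mbit/s * s -> Mbit, / 8 -> MB

ratio = megabytes(mpeg2_mbps) / megabytes(h264_mbps)
print(f"{megabytes(mpeg2_mbps):.0f} MB vs {megabytes(h264_mbps):.0f} MB "
      f"(~{ratio:.1f}:1)")
# -> 5400 MB vs 2250 MB (~2.4:1)
```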


Lossy on top of lossy is never going to be the best approach for archival purposes, which is what the GP was looking for.

If the question was about balancing quality and file size I would've given a different answer.


You can just use bash to make a 1:1 copy:

  sudo cat /dev/sr0 > ~/example.iso
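That one-liner works (cat runs as root to read the device, while the redirection is performed by your own shell into your home directory), but a raw read gives you no verification, and CSS-scrambled sectors stay scrambled in the image. A sketch of the same copy in Python with a checksum so the rip can be compared against a later re-read; the block size and paths are arbitrary choices, not anything DVD-mandated:

```python
import hashlib

def image_with_checksum(src, dst, block_size=64 * 2048):
    """Copy a raw device (or any file) to an image in multiples of the
    2048-byte DVD sector size, returning the SHA-256 of what was written."""
    digest = hashlib.sha256()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(block_size)
            if not chunk:
                break
            fout.write(chunk)
            digest.update(chunk)
    return digest.hexdigest()

# e.g. image_with_checksum("/dev/sr0", "example.iso")  (run as root)
```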


Streaming doesn't stop it, you just record your screen. It's not perfect, but it works.

DRM is dumb.


> Streaming doesn't stop it, you just record your screen

Not necessarily, see HDCP.


> Not necessarily, see HDCP.

Not necessarily, see HDCP strippers.


Yes, but it stops easy copying. Don't let good be the enemy of perfect.


Really it makes it easier to download the pirated copy, where someone's already done the work of getting around the copy protection, plus removed the menus and unskippable ads and copyright warnings.


Interesting reversal of that saying.

Regardless, my point was the futility of it because screen recording is acceptable, not that we shouldn't strive for better.


Ironically, you're allowed to make a copy for yourself if you rent.


But you have to delete the copy when you give the original back so what's the advantage?


> But you have to delete the copy when you give the original back so what's the advantage?

You do in the USA? Why? AFAIK, not in the Netherlands. The copy was made from a legal source and in a legal way, and is therefore legal regardless of whether you still possess the original.


Yes, in the USA you also have to destroy copies when you sell a movie or CD to someone. This is the whole point of copyright - to limit distribution. Otherwise running a rental service wouldn't be any different from hosting a download site.


yeah they're all here: https://passthepopcorn.me/


How well do these trackers prevent - rather than invite - legal prosecution for their users?


The people who go after these sites are more interested in prosecuting the site operators than the users. I think they want to make running a private torrent site so risky that no one wants to attempt it. That said, the best sites have operators with some experience at avoiding the law.


they don't really do much of anything to prevent legal prosecution. they're only still around because they're relatively small, invite-only communities. if it ever went public it would get shut down in a day.


It's also pretty pointless: when the MAFIAA do manage to get a site taken down, new ones crop up almost immediately. What.cd was taken down recently and now has 2 or 3 viable replacements.


HandBrake found me a defective RAM module in my PC: it froze in the middle of video conversion - every single time at exactly the same video position. After further investigation I found the "bug" in my RAM...


How is that even possible with virtual memory and paging? Why would the same video data or whatever go to the same physical location every time?


1. The OS likes to fill physical memory with cache, rather than leaving it empty, because modern memories zero quickly enough for the OS to pretend it has "free memory", only actually freeing it on demand.

2. "Clean" mmap(2)ed pages count as "cache" in the above.

3. Memory allocations are usually physically contiguous—when you mmap(2) something, and then read every byte of that thing in order, that thing usually ends up in a contiguous run of physical memory as clean page-cache entries. This will be true to the extent that you have no other memory pressure forcing the page cache to fragment, evict other caches, or overwrite itself.

4. ASLR just juggles virtual memory, not physical memory. IIRC (not a kernel dev), Linux at least has an allocation strategy that will effectively allocate physical memory serially if there's no contention.

Put 1-4 together, and you get a system where a big mmap(2)ed file that takes up all the physical memory of an otherwise-idle system, will end up putting that thing into the same places each time (because that's the only place such a large allocation will fit.)


I can imagine bunches of possibilities. Paging can make things hard to predict, especially when multiple programs are allocating memory, but it doesn't make the system non-deterministic, nor does it make hitting the same physical address impossible.

One possibility is that he didn't restart the program between retries, and the memory in question was already allocated. Another possibility is that he only ran handbrake and nothing else, and the OS was in more or less the same state both times. It could be that the problem was triggered by stack allocations rather than heap allocations and the video block in question caused a large-ish recursion that hit the problem, and would be likely to hit the problem no matter what was running since it's somewhat rare to have large stack allocations.

Chances are it was actually none of those things, but they're real possibilities anyway.


Maybe my Handbrake installation was broken because of defective RAM - I don't know exactly... anyway: I found the problem was RAM and now it works...


It's actually scary how much (unpredictable and maybe undetectable) stuff can happen due to bad RAM.


I once had a bad RAM socket. I sent back RAM that failed memtest86 and was rather confused when the next set failed in the same way.


Yeah! Bad bit baked into the executable is a strong possibility.


Presuming "frozen PC" means "unresponsive and must be forcibly rebooted", there would be no retrying the program without restarting.

What with Windows Update and the variety of other similar OS- and application-level auto-updaters, is getting the computer into a very similar state likely? I'm not sure but my gut says no.

That said, at first I was imagining a desktop computer with 4 or 8 memory modules, but given a machine with just 2 modules, maybe it follows that one module usually gets filled with "core stuff" and the second, defective module somewhat infrequently sees "big user stuff" after the first module is filled, and I guess that isn't too much of a head-scratcher when it comes to identifying the source of the problem.


> but it doesn't make the system non-deterministic

Actually, it does. It's possible to calculate WCET involving RAM accesses, as the behaviour is deterministic; there's a set latency.

It's not possible whenever swap is involved, which is why most of the realtime world simply avoids swap. This is mentioned in the Genode handbook, if you're willing to dig into it more.


I guess I need to clarify I was talking about read/write location determinism, and not timing. Timing could affect things, but it also might not. The question is only whether the same bit was touched, not whether the entire system state is identical in every aspect.

I assumed that swap wasn't involved, and I was even going to mention it but decided against it. While it is remotely possible, there's not much reason to suspect swap: Handbrake was actively running, it doesn't normally use enough memory to start swapping, and people ripping DVDs usually know not to be doing other things and/or using all their memory while ripping.

That said, are you saying swap in the OS really is non-deterministic by design, or just hard to predict? And what does Genode have to do with this, assuming he wasn't using Genode?


Before ASLR this wasn't unusual at all.


Amazing. These are like Bill Brasky stories or Schneier facts.

If Handbrake fails, it's only because your hardware is broken.


It's funny you should say that because the one time Handbrake failed for me Bill Brasky showed me how to hand solder my CPU, and I'd be your aunt if it didn't fix Handbrake. Bill Brasky, RIP.


That's actually super interesting! I honestly wish I knew more about RAM to understand how that one process exposed the bug. :(


MemTest86 showed me my RAM was defective at exactly one bit: whatever you wrote to that bit, you got 0 back. During video conversion HandBrake used this part of RAM and could not cope with the false data read from RAM... and froze...

Since I used this machine to back up my data on DVDs, I got many errors in my backed-up data. But since I had a copy of everything on another HDD, I threw the DVDs away and backed up everything again (after replacing the defective RAM module and reinstalling the OS and software on my machine).
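A stuck-at-0 bit like that is exactly what a memtest-style pattern pass catches: write all-ones everywhere, read back, see which bits come back cleared. A toy model (the failure mode is taken from the comment above; the addresses and sizes are made up):

```python
class FaultyRAM:
    """Toy model of a module with one bit stuck at 0: writes appear to
    succeed, but reads of that bit always return 0."""
    def __init__(self, size, stuck_addr, stuck_bit):
        self.mem = bytearray(size)
        self.stuck_addr, self.stuck_bit = stuck_addr, stuck_bit
    def write(self, addr, value):
        self.mem[addr] = value
    def read(self, addr):
        value = self.mem[addr]
        if addr == self.stuck_addr:
            value &= ~(1 << self.stuck_bit) & 0xFF  # force the bad bit to 0
        return value

def find_stuck_bits(ram, size):
    """memtest-style pass: fill with 0xFF, then report any cleared bits."""
    for addr in range(size):
        ram.write(addr, 0xFF)
    return [(addr, bit) for addr in range(size)
            for bit in range(8) if not ram.read(addr) & (1 << bit)]
```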


> HandBrake used this part of RAM and could not cope with false data read from RAM

Which in itself suggests that the encoding stream (or one of the intermediate streams during the encoding process) has some form of strong channel coding - in error detection - either by design or not. Detecting a 1-bit error in a gig+ stream, perhaps without easily being able to locate it beyond saying it's in such or such chunk (the current frame(s) being processed).


The best way to find memory problems is intensive multi-threaded video transcoding and intensive compiling (say, installing Gentoo).


Or you know - running memtest86[1].

[1] http://www.memtest.org


Indeed, I'm also a fan of http://www.advancedclustering.com/products/software/breakin/ - use it for soak testing systems before giving them real load


Wow, this looks awesome, will definitely check it out, thanks.


It is amazing to see software that is about 10 years old just hitting 1.0. I never really quite understood that. Is the developer just not confident enough in it that it stays in beta for a while? Or is it just a style of versioning? Anyway, glad to see development on HandBrake. Great software!


Lots of open source takes 10 years to hit 1.0.

Though 1.0 usually has a different meaning in the two worlds.

In open source, 1.0 usually means the set of features the project had in mind at the beginning is complete. Whereas in closed source it usually means the first version that works to a minimal extent.

Closed source 1.0 equals open source 0 point something.


I generally take version numbers near 1 to be market communication about stability/fitness. Above that, I take version numbers as a loose grouping of significant change sets.


Versioning is generally arbitrary. I actually don't know why people pay so much attention to it.

1.0 for some people, is 0.1 for others.

1.0 might mean it's stable, or it could mean that it's feature complete. As you say, it could also be used to convey the confidence the developers have in the software.

The thing I find most important myself is conveying compatibility, i.e. semver. To me that tells me it will be easy to upgrade a library, or it could be hard.


It is not completely arbitrary, though. Usually 1.0 for proprietary software means "the first version shipped to customers", while for open source, the very first commit is 0.0, and then it goes from there. Since it is public from the start there is no 1.0 moment. Sometimes, especially for programming languages, the 1.0 version is used to signal that from now on there will be no arbitrary changes to syntax or semantics, so that it can be relied on.

Traditionally, version numbering has been used to signal the significance of the release. For version x.y.z, you could expect that

  * x is incremented: Major new features, possibly incompatibilities
  * y incremented but not x: Minor new features and bug fixes.
  * z changed only: Bug fixes only.
This was generally observed in proprietary and open source software alike, and is still used in many projects. Recently many projects have abandoned this pattern, including Chrome, Firefox, the Linux kernel and others.
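The x.y.z convention maps naturally onto tuple comparison, which is also why "0.10.5" sorts after "0.9.0" even though naive string comparison says otherwise. A sketch (real semver also has pre-release and build tags that this ignores):

```python
def parse(version):
    """Split 'x.y.z' into an integer tuple so comparisons are numeric."""
    return tuple(int(part) for part in version.split("."))

assert parse("1.0.0") > parse("0.10.5") > parse("0.9.0")
assert "0.10.5" < "0.9.0"   # string comparison gets the ordering wrong
```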

Of course there has always been pressure from the marketing departments to have a new major release while the engineers hold back, so you have always seen major releases that aren't that major, and sometimes incompatible changes sneaked into minor releases. The latter has generally been considered bad form.


I can generally agree with this save for two cases. One, I've found that major bumps in a programming language are significant, especially in the case of something like python 2 vs 3. Second, a lot of major javascript players have been using major bumps mainly in the case of breaking APIs, removing functionality, etc (ex: Lodash, ExpressJS, and - soon - React).

Conversely, I've noticed in Rails that A LOT of major refactoring and functionality has been added via minor bumps that, while still backwards compatible for Rails itself, often break custom implementations, usually related to JavaScript modules and configs.

Overall, though, I agree with you.


Then there was Java, where 1.5 meant 5.0. And Win 7, which is 6.1 internally.

(When engineering and marketing aren't on the same page?)


You develop and test as fast as you have time to.

HandBrake probably does not bring money to its developers, so the pace is inherently limited.


It's not necessarily about the pace. Some 0.x software is more feature complete than other 1.x software. It's not possible to compare versions between different pieces of software to infer anything meaningful about how much work went into them.

To give an example, it took VLC a long time to reach version 1, yet even in its 0.x releases it was amongst the best video players available (in terms of features, performance and stability), better than many other closed-source competitors with higher version numbers.

In other words, aside from libraries and programming languages (where major version numbers do have extra significance), version numbers are only loosely coupled to software quality.


HandBrake offers a really nice GUI for many one-off transcoding tasks. If you're looking to automate transcoding tasks with some scripting, handbrake-cli (or ffmpeg directly) are very powerful, albeit overwhelming at times.

For something in the middle - offering both convenience and scriptability - I recommend video_transcoding[1] (uses handbrake-cli and ffmpeg under the covers). It's a really handy set of command-line tools that eliminate a lot of the guesswork and frustration.

[1] https://github.com/donmelton/video_transcoding


I'm a little surprised they aren't signing their MacOS releases. It's even documented on the download page, "We are not currently able to sign the HandBrake downloads". I wonder if it's a philosophical choice or a legal one? It seems like a failure of Apple's Gatekeeper though: either because such a popular app is not able to be signed, or because it's not signed and yet so many people run it anyway.


> I'm a little surprised they aren't signing their MacOS releases.

Do any small developers actually do this? It seems entirely useless from a security perspective. You go through an expensive process so that at the end it can "verify" that the binary was signed by an individual the user has never met, who may not even live in the same country, and for all anyone knows is perfectly willing to sign ransomware, or has stolen some arbitrary third party's signing key.

If you don't actually know and trust the party who makes the software then the signature is worse than useless because it makes people think signed=trustworthy when in reality it only means signed=signed. And if you do know and trust the authors you don't need a CA to verify anything more, at great expense, when you can already just download via HTTPS from the domain you trust.

Apple should eliminate the practice entirely, and in the meantime no one should use it.


> If you don't actually know and trust the party who makes the software then the signature is worse than useless

Not true. The signature only needs to mean "we've verified the author's ID and he lives in a country that enforces the law". Then if he ships and signs malware, he can be sued and/or charged criminally.


> The signature only needs to mean "we've verified the author's ID and he lives in a country that enforces the law". Then if he ships and signs malware, he can be sued and/or charged criminally.

This is what I mean by worse than useless. Promoting reliance on the signature to mean something.

To pick a country, quite a lot of entirely legitimate software comes out of Russia. So does a lot of malware. Does Russia enforce the law? Sure, against people who aren't politically connected. Some of the malware authors are, so you're screwed. You can't just write off a country like that. There is still a baby in that bathwater. And that's not the only country with organized crime or corruption.

As soon as you have many small developers signing things you can't even really exclude by country at all because there are too many soft targets for malware authors to steal keys from. Some college student gets a signing key to sign his calculator app and then gets hacked, and now there is malware signed by John Smith of New Jersey. By the time anyone figures it out the attackers, now equipped with the false sense of security created by the signature, have hacked many other people and captured even more signing keys.

It's like security theater where the criminals pick your pocket while you're distracted watching the show.


Yes, a lot of developers sign their releases, I'd say it's about 50/50 of the things I use. There are some advantages to signing, but I'm not sure it's enough to outweigh the hassle.

It's hard to talk about Gatekeeper without also talking about the Mac App Store, which as far as I can tell is a fiasco in all ways.


Apple's codesign ensures end-to-end chain-of-custody integrity with nonrepudiation, tied to a specific, named signer and likely also a credit-card.

GPG signing and releasing fingerprints of all released artifacts on an https://-served release notice would accomplish nearly the same thing, but requires more steps and causes a confusing `“HandBrake” can’t be opened because it is from an unidentified developer.` dialog.

It is best practice to both GPG-sign all release artifacts and use vendor-specific code signing / app stores; otherwise conversion will suffer, with each additional hoop multiplied by the N of the entire user base, resulting in much more time wasted.

End-to-end integrity also prevents entire classes of attacks such as hacked CDNs, hacked networks and so on.


Universalizing GPG would be an improvement.

The "end-to-end chain-of-custody" is actually the problem, because it does two bad things.

First it encourages people to give it more faith than is due. Having an ironclad guarantee that something is approved by a specific untrustworthy person can do more harm than good when people see the guarantee and not the guarantor.

Second, when the process has barriers (e.g. for poor students or foreign nationals), you get a lot of legitimate software that isn't signed, which means you're harmfully desensitizing users to security warnings. Or locking out legitimate software.

Suppose you replace that with automatic GPG signatures, where the software has to be signed by the author but the author doesn't have to be signed by anybody else. You still have something useful -- you can verify that two pieces of software are from the same author. And that updates are from the same author as the original. And the author can publish their public key to their website, allowing security-conscious users to link the software to the trusted website.

Meanwhile signing becomes only a checkbox. With no gatekeeper deciding who can and can't sign, no one is excluded, so everything can be signed and there are no spurious security warnings for legitimate software.


There is no expense to having a developer account with Apple to sign releases. You have to pay to distribute on the iOS or Mac App Store. At this point so much of it is automated by Xcode that there are no real extra steps to do a simple developer-signed release.


The expense isn't so much money you pay to Apple, it's the typically multiple hours and hundreds of dollars necessary to get an EV certificate from a CA.


Code signing on macOS does not rely on CA certificates, but you have to be a member of the paid Apple Developer program ($99/year).


I'm not using 1.0.0+ until it's codesigned and/or dmg gpg verifiable with a known-good signature.

Do not install untrusted, unverifiable apps is security 101.


Just download the source code, audit it, and build it yourself.


Who has the time to audit the source code of all the code they'd like to run on their computer? If you find them, I'd pay them to audit it for me, build it, and sign it so that I don't have to.


I'm a huge fan of HandBrake and excited to see them still improving the application. The last time I used a DVD ripper was >5 years ago but it was an essential tool for me earlier in life. I'm happy to see I will still have it available should I need to use it.


I haven't ripped a DVD in ages, but I still use HandBrake all the time for user-friendly h.264 encoding for crushing files to fit on my phone or on the web. It works with any input that ffmpeg supports.


I love Handbrake. It's my goto for video transcoding.

I often download stuff for my children from YouTube using a YouTube downloader, and then transcode them to the ideal iPad format, so the children can watch stuff in the car on the iPad without an internet connection. Great for long trips.


You could use youtube-dl and request the content in an iPad-compatible format. It would save you the waiting time for the transcoding.


If you get YouTube Red (which also saves the kids from ads), you also get offline support built into the mobile app. Just in case you weren't aware.


An iPad can't play regular MP4/H264 videos by default?


It can, but you can optimize the encoding so that the iPad needs to do less work playing the video.


Not all of them, at least. There are some restrictions with regard to profiles and levels that are supported (i.e. bells and whistles that the codec can use).


I've so much respect for the team behind handbrake. What a quality piece of software. Kudos to them to making it to 1.0 -- I hope to be using it for many years to come. It's cool to see H265 support too. That's something I look forward to trying out.


Previous releases (0.10.something, maybe even 0.9?) already had H265 support, IIRC, two years back or so.

And yes, great team.


I just downloaded it for the first time in a while 3 days ago and noticed that H.265 exists and compresses twice as well at the same quality level... how in the hell did I miss that? (VLC will play them.) I did a test on a full-rez MKV and it worked great.


Most hardware video players (like the Raspberry Pi etc.) can't decode H.265 using hardware acceleration, and an RPi 2 (I haven't tested the RPi 3) does not have enough computational power to play FHD H.265 at 24 FPS in software.

Until hardware H.265 decoding comes to popular media center hardware, H.264 will remain the codec of choice for most people.


> Until hardware H.265 decoding is introduced to the popular media center hardware, H.264 will remain the codec of choice for most people.

Might never happen, thanks to netvc.


link to the netvc issue?


Not an "issue", but an effort, backed by many high profile players, not least Google, Mozilla, and Cisco, to create an open, free, alternative: https://en.wikipedia.org/wiki/NETVC


H.265 only compresses twice as much as H.264 in marketing materials, or when not comparing to x264. It is trivial to change the defaults to give half the bitrate but you will not get the same quality.


I just saw bluray rips of the same show on Usenet. x264 = 2.5GB, x265 = 600MB.

I'm not certain the scene releasers with reputations to maintain are going to release terrible quality rips.


The same people who insisted on 192 kbps CBR non-joint stereo for years? Hahaha...


I ask as someone who uses ffmpeg regularly, what does Handbrake offer over ffmpeg?


Handbrake offers you, literally, a GUI over ffmpeg - and sane defaults; configuring ffmpeg gets overwhelming fast.

It also has some nice queue management, a bit better than what you'd get from shell scripting.

The downside is that it's a GUI.


Re: GUI 'downside' - They mention a 'JSON API' to interact with libhb, though I have yet to find it in the docs.


A cli is available as a separate app


Handbrake uses ffmpeg behind the scenes and adds a nice GUI and many DVD related features. Ffmpeg is a dark art to many.

The JSON API to interact with libhb also sounds interesting.


I learned the ffmpeg CLI commands by reading the HandBrake logs. Rather than read tons of man pages, I used HB to dial in my settings, then used those settings in a pipe and tuned them further. It cut development time down by days.
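As an illustration of that workflow: HandBrake's activity log prints the x264 option string it used, and that string can be passed straight through to ffmpeg's libx264 encoder via `-x264opts`. A sketch, with a made-up option string standing in for whatever your own log shows:

```python
# Hypothetical x264 settings, in the colon-separated form they appear
# in a HandBrake activity log.
x264_opts = "ref=4:bframes=3:b-adapt=2:me=umh"

# ffmpeg's -x264opts flag hands that string to libx264 unchanged,
# so settings dialed in through HandBrake carry over directly.
cmd = [
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libx264", "-crf", "20",
    "-x264opts", x264_opts,
    "output.mp4",
]
print(" ".join(cmd))
```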


I used to work with a team that used HandBrake for transcoding - from what one of the developers told me, HandBrake handles weird video formats better. Some of the videos would have trouble with ffmpeg, which would produce "broken" transcoded videos, videos without audio, and some other issues that I don't remember now.

Myself personally, I feel quite comfortable with ffmpeg and I never had any problems with it so whenever I can I use it.


The issues you bring up are not deficiencies with ffmpeg, but rather the user. One can prevent all of the issues you cited with the correct ffmpeg commands and an understanding of how media codecs and containers work.


An issue that causes the user to misuse the software can rightfully be considered a bug.

If a user types 3 + 4 [Enter] into a desktop calculator and gets 8, they probably would think the calculator is defective. If the calculator's manual documents that the 4 and 5 key had to be swapped for a legacy manufacturing reason, then who is at fault, the calculator or the user?

Confusing flags and poor defaults that cause the user [1] to misuse the software are "bugs" too, regardless of whether the author intended it to work that way.

[1] We're talking about the collective user here, a UI will never be intuitive to 100% of your users, but if only 5% of your users understand it that's a problem.


I don't disagree with your opening sentence, but your premise is likely based on an incomplete understanding.

The FFmpeg project produces libraries for handling digital multimedia, and it also offers a command line application. Handbrake, on the other hand, is a GUI-based application meant for DVD ripping and video transcoding. These two projects may appear to share many goals, but there is little end-user overlap and they serve quite different purposes. And I might add -- both projects are _outstanding_.

Handbrake has a specific set of tasks on which it focuses. FFmpeg, on the other hand, endeavors to provide a powerful set of multimedia codecs, container handling libraries, codec- and bitstream-level filters, a high-performance scaler, etc.

In other words, FFmpeg's command line application is meant to be used by power users who know exactly what they are trying to do and what they need done to their files to produce that outcome. Handbrake, despite its extensive feature set, simply does not expose the low-level functionality that FFmpeg/libav does. Under the hood, a lot is going on in Handbrake about which the user is totally unaware.

To take an example from the OP -- in order to avoid faulty initiation of an audio track in a video, an FFmpeg user must explicitly rebuild the output container's time base. If the team is not knowledgeable in these lower-level areas of digital media, then it seems totally logical for them to use Handbrake. Handbrake will auto-detect the need to do this process for each individual source and implement it without even informing the user in its log output.

The FFmpeg project is not responsible for teaching software developers about the fundamentals of digital multimedia, and it is not a 'bug' that they don't do so.


I'm sure that the correct ffmpeg commands for various containers would prevent issues; however, HandBrake does that out of the box, with one command, for all the formats the company was using. I don't know the specific details, but I do know that ffmpeg was used for quite some time before the team decided to switch to HandBrake after running tests on thousands of normal and problematic files.


If you are comfortable with ffmpeg and shell scripting you probably don't need Handbrake. It's basically a nice UI.


ffmpeg has no support for dvd titles or chapters.


- it supports extracting titles from dvd and blu ray

- a gui


How good is the ffmpeg AAC encoder these days? Ages ago when I started using HandBrake, it using Apple's CoreAudio codec on Macs for the AAC encoding was a big plus on the audio side.


It's actually fine for most use-cases. Beats FAAC and VO_AACENC easily and it's slightly worse than FDK-AAC from Fraunhofer.

In short - just fine to use and it's the default (without -strict) now.
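For reference, picking between the two in ffmpeg is just a codec flag; `libfdk_aac` is only present in builds compiled with `--enable-libfdk-aac`. A sketch (the bitrate and filenames are arbitrary):

```python
# Native AAC encoder: the default in modern ffmpeg, no -strict needed.
native = ["ffmpeg", "-i", "in.wav", "-c:a", "aac", "-b:a", "160k", "out.m4a"]

# Fraunhofer encoder: slightly better quality, but only available if
# the ffmpeg binary was built with --enable-libfdk-aac.
fdk = ["ffmpeg", "-i", "in.wav", "-c:a", "libfdk_aac", "-b:a", "160k", "out_fdk.m4a"]
```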


Apple's AAC encoder is still the best out there for high bitrates.


libfdk_aac is fast and good


Which is not the default ffmpeg encoder, and due to licensing you can't distribute libfdk_aac and x264 in the same ffmpeg binary.

There's a built-in ffmpeg AAC encoder too, but despite the author's claims it is not as good as libfdk_aac.

And libfdk_aac itself isn't as good as Apple's AAC codec either. There's a Hydrogenaudio listening test demonstrating that.


Unfortunately it's proprietary, because of unclear license terms about distribution of source code and modifications (saying that you cannot charge a "copyright license fee") and it clearly says that patentable ideas in the source code are not licensed.


If you don't run ffmpeg all the time, this makes it easy. Pop in a DVD, click a few buttons, and it's ready for your iPad. Try it!


Ease of use and easy installation on windows.


I stumbled upon this project about 2 months ago. Wanted to convert a bike race video from avi to mp4.

It worked surprisingly well. Glad to see a new version of this released.


how was the speed?


Pretty fast. He finished in third place.


Wow, I used to use this many years ago when ripping DVDs was a thing and also for the occasional transcode to mkv. I had no idea it was still in development. I'll have to check it out again.


Why transcode just to package media into an mkv container? Mkv is pretty much codec agnostic, so you could probably just stream copy. You'll save a lot of time and audio-visual quality by doing so.
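A stream copy in ffmpeg is just `-c copy` — a minimal sketch (assumes ffmpeg is on your PATH):

```python
import subprocess

def remux_cmd(src, dst):
    # "-c copy" copies every stream bit-for-bit into the new container
    # instead of re-encoding: seconds of work, zero quality loss.
    return ["ffmpeg", "-i", src, "-c", "copy", dst]

def remux_to_mkv(src):
    dst = src.rsplit(".", 1)[0] + ".mkv"
    subprocess.run(remux_cmd(src, dst), check=True)
    return dst
```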


I transcode because H.264 saves me a lot of space over MPEG 2. I don't notice the quality loss but I do notice the disk space and faster file transfers.


Also, most modern players are much faster at seeking through a proper mkv than an old avi or a much larger file.


This is true indeed, mkv is a great container. The reason for my question was that the OP appeared to be transcoding (a codec-level operation) just to switch the container type. When it was mentioned that the transcode was to convert from MPEG-2 to H.264, my question was answered. :)


What's the difference between this and FFMPEG?


It has a nice GUI frontend with useful presets, understandable for the layman. Very good tool for the people not too versed in CLI and/or video formats.


Queue management (really great). Ability to extract titles from DVDs.
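Both of those can be scripted against HandBrakeCLI; a rough sketch of a title-extraction queue (`-i` input, `-t` title number, `-o` output, `-Z` built-in preset — the preset name here is just an example):

```python
import subprocess

def title_cmd(source, title, outfile, preset="Fast 1080p30"):
    # -t selects one DVD title; -Z picks a built-in preset by name.
    return ["HandBrakeCLI", "-i", source, "-t", str(title),
            "-o", outfile, "-Z", preset]

def rip_titles(source, titles):
    # A tiny "queue": rip each requested title to its own file, in order.
    for t in titles:
        subprocess.run(title_cmd(source, t, f"title_{t:02d}.mp4"), check=True)
```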


Ah brings back memories of my college days when I would go to the library at night when the computer labs were empty, check out a dozen dvds on 4 hour loan, and use a separate computer to rip each one at the same time.


Fantastic software that I use almost weekly and have done so for many years.


Odd how checking for updates via Handbrake's in-app update checker (is there a better way to write that?) fails to see any newer version. So I had to get it via their website. Meh.


Great timing! I just built a new workstation over the last few days, put a BluRay/DVD/CD writer in it, and am looking at the ~15 or so new movies we just got over Christmas that I'm planning to rip. Gonna set up a "media PC" hooked up to the TV to play our movies over the network.


I've always found Xmedia recode to be a little easier if you want to change advanced options

http://www.xmedia-recode.de/en/


I haven't upgraded in quite a while due to a shift (at .15, I believe it was) where the AAC codec used had licensing issues (whether with the library makers or in what the library makers were implementing). It only affected Windows and Linux, I believe. Is this still an issue? Is there a way to rebuild it including the lost component(s)? I believe they mentioned at the time that the replacement was inferior.


What a really nice Christmas present from the HandBrake team! Kudos, it's been my favorite video transcoder since its debut on BeOS back then...


Finally, it supports VP9 and Opus!


I hope this fixes the issue that I, but seemingly few others, have: the sound drifts behind the video, getting progressively worse as the video continues.

By 60 minutes in, the sound is a full five seconds behind the video.


Does this only happen when using variable bit rate audio, or does this happen with fixed bit rate audio as well?


Unfortunately both.


This was incredibly useful to me when I was still in high school. We used it in lieu of Compressor (from Apple Final Cut Studio) and I remembered it was quite a bit faster than Compressor.


The only thing I found strange about HandBrake is the subtitle handling. (Sometimes I can't just copy them 1:1.)

For everything else I loved it.


Pretty cool, faster is now even faster. My dad can't tell the difference, and I have to encode stuff for him often.


How does this compare to say ogmrip?

I haven't used either in a long time and would appreciate the input.


This is interesting information, must share this.


So I see that this is based on libav (ffmpeg libs). Does anyone know of an up-to-date tutorial on using libav? I want to write a music player, and I've tried to use libav, but the documentation is almost non-existent.


If you're looking for something more advanced, have a look at MeGUI


...Wait, it wasn't 1.0 already?

Could've fooled me...


Honestly, it already felt like a 1.0 product back when Netflix was a new company.


...Exactly.


I did not know it wasn't officially released yet. Really good stuff.


Don't use SHA-1 please.


Verifying a downloaded file doesn't require a cryptographically secure hash function...


Of course it does, otherwise a malicious mirror can (theoretically) work to find a collision between their malware and the legitimate file and serve you the former.

There's no good reason not to use a secure hash function.


If your threat model involves an attacker who is able to achieve a hash collision while still implanting a sophisticated malware, you should probably avoid downloading software from random websites...


It would be pretty impressive, as they'd need their malware to both do what they want and exactly match that hash. Not impossible, just clever.


well it does require one that can't have collisions. otherwise what's the point in "verifying"?


There is no hash function that can't have collisions by definition.


*isn't known to have collisions, then


They have moved away from SHA-1, and are now using SHA-256. Previous releases were signed with SHA-1, and before that it was MD5: https://handbrake.fr/checksums.php

I'm not sure why they haven't retroactively calculated checksums for older versions.

But you're right that SHA-1 needs to stop being used: https://sites.google.com/site/itstheshappening/

These researchers found the first "freestart" collision, and they estimate that a full SHA-1 collision would take a few months to compute, costing between $75K and $120K.

Practically speaking, I don't think anyone could make a profit by forging a Handbrake release, but the FBI probably do have some very high-profile targets who use video encoding software.
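Checking a download against a published digest is straightforward with any modern hash; a sketch using Python's hashlib (the filename is hypothetical — compare the result against the digest on the checksums page):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so a multi-GB download never has
    # to fit in memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. sha256_of("HandBrake-1.0.0.dmg") == "<published checksum>"
```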


Great stuff.



