In theoretical computer science, the branch that's essentially abstract mathematics, everything is done either in base10, which is what we normally operate in, or in base2, because it's the simplest to reason about.
In computer science more generally, meaning the things you're likely to study for a CS degree, the situation is very similar. Pretty much everything will simply be done in base10. The main place you'll work with base2 is architecture classes, when you're learning how numbers are represented internally in a CPU and how they're operated on. Base8 and base16 might come up if you find yourself working in assembly or doing low-level OS work.
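To give a feel for what those classes cover, here's a minimal Python sketch showing the same arbitrary value in each base, plus the two's-complement view of a negative number that an architecture course spends time on (the value 202 and the 8-bit width are just examples):

```python
n = 202

print(bin(n))   # 0b11001010  -- how the bits are laid out
print(oct(n))   # 0o312       -- one octal digit per 3 bits
print(hex(n))   # 0xca        -- one hex digit per 4 bits

# Two's-complement view of a negative number in an 8-bit register.
# Python ints are unbounded, so we mask to 8 bits to mimic the hardware.
neg = -42
print(format(neg & 0xFF, '08b'))  # 11010110
```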
If you get down to it, binary, octal and hex are essentially equivalent, since their bases are all powers of two: each is just a convenient way to write down a sequence of bits. As time passes, there are fewer and fewer reasons to bother with them for general-purpose computing. Using bitmasks (or the equivalent hex codes) was an essential memory-saving tool when you were dealing with a system that only had a few KB of memory, but in an era where a desktop icon can be over a megabyte, it's seldom worth the hassle. Obviously, there are still people writing low-level hardware interfaces, network services & doing embedded development, but most programmers are increasingly insulated from that by layers of abstraction.
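To make the bitmask point concrete, here's a quick Python sketch of the technique: packing several boolean flags into a single byte instead of one variable each. The flag names are invented purely for illustration:

```python
FLAG_VISIBLE  = 0x01  # 0b00000001
FLAG_LOCKED   = 0x02  # 0b00000010
FLAG_MODIFIED = 0x04  # 0b00000100

state = 0
state |= FLAG_VISIBLE | FLAG_MODIFIED   # set two flags
state &= ~FLAG_MODIFIED                 # clear one
if state & FLAG_VISIBLE:                # test one
    print("visible")

print(hex(state))                       # 0x1
```

Three booleans in one byte instead of three separate variables: a big deal with a few KB of RAM, barely worth mentioning today outside the low-level niches above.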
I'm not saying it's bad to learn them - it can be quite useful to be familiar with them (for example, Unix file permissions still use octal) - but don't expect to be using them every day.
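The permissions case is easy to see for yourself. Assuming a file called example.sh exists (a hypothetical name for this sketch), each octal digit of the mode maps directly onto a 3-bit read/write/execute mask for owner, group and other:

```python
import os
import stat

# 0o755: owner rwx (4+2+1), group r-x (4+1), other r-x (4+1)
path = "example.sh"   # hypothetical file, assumed to exist
os.chmod(path, 0o755)

mode = os.stat(path).st_mode
print(oct(stat.S_IMODE(mode)))  # 0o755
print(stat.filemode(mode))      # -rwxr-xr-x
```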