How much scaffolding your Selenium UI tests require, or how long they take to run, are genuine practical factors. You might rationally segment such tests into a "beyond unit test" suite. Just don't eliminate that testing step "because it isn't a unit test." If you're working toward software quality, you can't be "saved by the bell" of what does or doesn't count as a pure unit test.
A traditional response would be: Well, you need to add other testing layers (module, integration, specification, acceptance, ...) and test those things as well--but that's outside the scope of pure unit testing. That distinction, though, is often used to punt on those other forms of testing, or to leave them to someone else. "Not my job, man. I'm just responsible for unit tests. Someone else is responsible for integration and UI testing." That punting is an abdication of responsibility, and inimical to TDD's philosophy of integrating coding and testing. It's also a big reason unit tests are infamous for passing with flying colors even when the code as a whole is still breaking, or not doing the job it was designed to do.
There is a place for segmenting testing realms. Performance testing, walk-through-the-UI testing, intrusion testing, and exhaustive compliance testing against all possible supported libraries and platforms can each require spinning up considerable middleware and virtual instances, and might take tens of minutes or hours to complete. Run those test suites occasionally, rather than on every code edit. But for goodness' sake, do not use their "higher level" nature as an excuse to avoid doing them.
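As a concrete sketch of that segmenting (the platform matrix and test body here are invented for illustration), pytest's marker mechanism--most xUnit-family runners have an equivalent--lets you tag the long-running suites and decide when they run:

```python
# test_platform_matrix.py -- a long-running suite, tagged "slow" so it runs on its own cadence
# (register the marker, e.g. in pyproject.toml: markers = ["slow: long-running suites"])
import itertools
import pytest

SUPPORTED_PYTHONS = ["3.10", "3.11", "3.12"]      # illustrative matrix, not a real project's
SUPPORTED_DATABASES = ["sqlite", "postgres"]

@pytest.mark.slow
@pytest.mark.parametrize("py,db", itertools.product(SUPPORTED_PYTHONS, SUPPORTED_DATABASES))
def test_supported_combination(py, db):
    # Placeholder body: a real suite would provision this environment and exercise it.
    assert py and db
```

Run `pytest -m "not slow"` on every edit, and `pytest -m slow` from a nightly or pre-release job. The tests stay in the codebase and keep running--just not on every keystroke.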
There is also often a notion that integration, UI, and other forms of testing require different tools. You may need some additional tools (Selenium, in this case), but I cannot see the point in requiring an entirely different testing foundation of test runners, fixtures, mocking and support libraries, and so on. I'd argue that maintaining separate infrastructure for four or five desired layers of testing is anti-simplicity, and ultimately an anti-pattern in itself.
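As a sketch of that reuse (the staging URL and element id are made up), a single pytest fixture can hand a Selenium driver to UI tests that otherwise look exactly like the unit tests sitting next to them:

```python
# conftest.py -- the same runner, fixtures, and asserts serve unit tests and Selenium UI tests
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture(scope="session")
def browser():
    # Headless Chrome; another driver or a remote grid slots in here without touching the tests.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()

# test_login_ui.py -- written like any other test: plain function, plain assert
def test_login_page_shows_submit_button(browser):
    browser.get("https://staging.example.com/login")   # hypothetical URL
    assert browser.find_element(By.ID, "submit").is_displayed()
```

Same fixtures, same mocking and support libraries, same command to run everything; the only genuinely new piece is the driver.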
My experience is that rigid conformance to unit-testing dogma--"only one possible reason to fail" and "don't test things together," for example--is restrictive and ultimately counterproductive. You certainly want tests that exercise your code in clear, simple, direct ways. Pure unit tests can and do help with that. But why stop there?
I build "unit" tests with an expanded model of what constitutes a unit--not just individual software atoms or particles, but multiple routines and objects, acting together. Atoms composed into molecules, and then molecules composed into larger molecules.
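As a toy sketch of that idea (Cart, TaxCalculator, and checkout_total are invented for the example), an "atom" test and a "molecule" test can sit side by side in the same file, using the same framework:

```python
from dataclasses import dataclass, field

@dataclass
class Cart:
    items: list = field(default_factory=list)

    def add(self, price):
        self.items.append(price)

    def subtotal(self):
        return sum(self.items)

class TaxCalculator:
    def __init__(self, rate):
        self.rate = rate

    def apply(self, amount):
        return round(amount * (1 + self.rate), 2)

def checkout_total(cart, calculator):
    return calculator.apply(cart.subtotal())

def test_cart_subtotal():
    # atom: one object, one reason to fail
    cart = Cart()
    cart.add(10.00)
    cart.add(2.50)
    assert cart.subtotal() == 12.50

def test_checkout_total_combines_cart_and_tax():
    # molecule: several objects exercised together, with the same runner and asserts
    cart = Cart()
    cart.add(10.00)
    assert checkout_total(cart, TaxCalculator(rate=0.08)) == 10.80
```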
Relaxing the notion of what constitutes a "unit" and piggybacking on unit testing infrastructure for integration and UI tests may not win me awards for testing purity, but it serves the larger goal. It gets more tests written and executed--both in an absolute sense, and per hour of testing effort. And it gets a wider swath of my code tested, in ways that mimic real use.
Isn't that the ultimate goal? Having more robustly tested, works-together, quality software products? If so, go ahead and use a pure, no-dependencies-whatsoever model of unit testing to start. Just don't stop there. The fact that slightly higher-level or as-a-user-sees-it tests might fail in several ways, or that they have dependencies, may be a reason to put those tests into different test specs/files, or to run them less frequently. But it's not a reason to skip them--and often, not even a reason to decouple them from the basic coding process.