Does this mean I should learn OpenCV in another language first in order to utilize it in Swift?
Yes, you should learn it in the context of its native C++. The value of OpenCV doesn't have much to do with C++, or with any programming language; the value is in the mathematical tools it provides.
OpenCV, at its heart, is a toolkit of mathematical functions. Transforms, color conversions, edge-finding algorithms, and so on are all mathematical algorithms that help solve computer vision problems.
That's the reason the tutorials you're finding just hook the components up; you're assumed to already understand the mathematics that OpenCV provides. There's really nothing C++-versus-Swift about detecting edges in images, converting color spaces, and so on. The only thing unique to Swift is getting the build systems and whatnot to play together nicely. All the image-processing portions are covered in the OpenCV documentation.
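As a rough illustration of what that core looks like, here is a minimal C++ sketch (the file name and threshold values are placeholders chosen for the example) that converts an image to grayscale and runs Canny edge detection; these are the same `cv::cvtColor` and `cv::Canny` calls you would ultimately drive from Swift.

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

int main() {
    // "input.jpg" is a placeholder path for this sketch
    cv::Mat color = cv::imread("input.jpg");
    if (color.empty()) return 1;

    // Color space conversion: BGR (OpenCV's default ordering) to grayscale
    cv::Mat gray;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);

    // Edge finding: Canny with example thresholds (tune for your images)
    cv::Mat edges;
    cv::Canny(gray, edges, 50.0, 150.0);

    // Show the result; the GUI call is incidental to the math
    cv::imshow("edges", edges);
    cv::waitKey(0);
    return 0;
}
```

Knowing *why* you'd pick those thresholds, or why you'd convert color spaces at all, is the actual learning curve; the calls themselves look the same in any binding.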
OpenCV isn't really all that specific to C++ either; C++ was chosen as a good common-denominator language. At some point, though, you will have to understand the practical side of how OpenCV was implemented and get comfortable with its data structures, which all have a C++ taste to them.
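For example, nearly everything flows through `cv::Mat`, OpenCV's reference-counted image/matrix container. The sketch below (dimensions and values chosen arbitrarily) shows the kind of C++-flavored details, such as element types, channel counts, and view-versus-copy semantics, that you end up needing regardless of the host language.

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // A 480x640 image with 3 channels of 8-bit unsigned values,
    // filled with a solid color (BGR channel ordering)
    cv::Mat image(480, 640, CV_8UC3, cv::Scalar(255, 64, 0));

    // Size, element type, and channel count live in the Mat header
    std::cout << "rows=" << image.rows
              << " cols=" << image.cols
              << " channels=" << image.channels() << "\n";

    // A region of interest is a view into the same data, not a copy;
    // modifying it modifies the original image
    cv::Mat roi = image(cv::Rect(10, 10, 100, 100));
    roi.setTo(cv::Scalar(0, 0, 255));

    // clone() forces a deep copy when you actually need one
    cv::Mat copy = image.clone();
    return 0;
}
```

Those ownership and view semantics are pure C++ conventions, but once they click, wrapping the same calls for Swift is mostly plumbing.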
The typical workflow for developing an OpenCV project is to create, test, and fine-tune the algorithms in C++, then port to the final language at the 'last minute' if that is a required step.