It sounds to me like you have a dysfunctional team with a cowboy culture and you're trying to figure out the root cause. You're proposing a hypothesis that developers don't respect the test team because of some implicit hierarchy, length of service, or some other factor, but you're not really presenting evidence for it; you're essentially asking "could this be what's wrong?"
In fact, it sounds like many things are wrong with the organization, and any issues driven by perceptions of status or power are merely symptoms of poor leadership. You are not an agile shop if you are not practicing any of the enabling mechanisms of agile development. One-line stories are not agile; those are mere bullet points on a wishlist. A story contains a business motivation, a description of the customer's interaction with the product, and a definition of when the story is done. If you don't have those three things, you don't have enough information to decide what should happen or to know you've done it right, so the story will never be "done". That's treading water, not making progress. Developers will never be short of work in such organizations, because they'll constantly be firefighting, aided only by tiny buckets of their own urine.
Some of the "definition of done" can be part of a general team agreement, but specific acceptance criteria for any story, even if terse, are essential.
There are very few cases in which "automated testing doesn't really make sense for us right now". It may be that the test team isn't the right organizational locus to deliver automated testing, especially early on, but it always makes sense to have automated testing. While it's OK in my book for developers to do a little exploratory coding without formal automated tests (I'm not a TDD or even BDD purist), it seems horrifying to me as a developer that I'd consider releasing code to a test organization with no developer-written automated tests. Unit tests and BDD tests written by developers, with scenarios preferably written by Product Owners, are essential parts of agile delivery.
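For a flavor of what I mean by developer-written BDD-style tests, here's a rough sketch using plain pytest with given/when/then comments; the Cart class and its behavior are assumptions invented for the example, not anything from a real codebase.

```python
import pytest

class Cart:
    """Hypothetical stand-in for whatever object the story actually touches."""
    def __init__(self):
        self.items = []

    def add(self, sku: str, qty: int = 1) -> None:
        if qty < 1:
            raise ValueError("quantity must be at least 1")
        self.items.append((sku, qty))

    @property
    def count(self) -> int:
        return sum(qty for _, qty in self.items)

def test_adding_an_item_increases_the_count():
    # Given an empty cart
    cart = Cart()
    # When the customer adds two of the same item
    cart.add("SKU-123", qty=2)
    # Then the cart reports two items
    assert cart.count == 2

def test_rejects_non_positive_quantities():
    # Given an empty cart, when a zero quantity is added, then it is rejected
    cart = Cart()
    with pytest.raises(ValueError):
        cart.add("SKU-123", qty=0)
```

Whether you use a BDD framework or just this comment convention matters far less than the habit of shipping the tests alongside the code.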
Figuring out the best use of a test organization in an agile team is a tricky problem with no single formula for success. In an organization with no definition of done, it will be very difficult to demonstrate value, because there's no way of knowing whether the test team has contributed to "done." I've worked in old-school waterfall teams as well as agile teams with 1) no distinct test organization, 2) moderately integrated test teams, 3) partially integrated test teams with separate stories and work products, and 4) gated release models, where some QA involvement happened alongside ordinary development but there was a distinct "test pass" for legacy or regulatory reasons.
The "right" model for the test team will actually depend on the level of technical sophistication of the test team members. I think having test team members with moderate or better technical sophistication pair with a developer while writing code, to suggest cases for automation, can be a great model. But a test team can be reasonably effective in validating that the stories have measurable acceptance criteria, doing some exploratory testing as developers check in code, and trying to augment developer unit testing with integration scenarios and fleshing out special cases. It's even sort of ok to have a throw-the-build-over-the wall approach in some circumstances, as long as there's a way of converting stories into test cases and there's some sort of feedback loop with the Product Owners and Developers.
But you won't really get there without active buy-in from your management and product owners on what the organizational priorities are and what the test team's role should be. I doubt there've been any serious conversations in your team beyond "I've worked on other software projects, so I know we need some sort of test effort. Let's hire a test team." Most average and some above-average developers will tolerate organizational inertia that doesn't demand they engage with the test team. For real progress to be made, some management-led or consensus-driven initiative to drive better development practices needs to happen.
As a developer and a former STE, STE Lead, and SDET, I have nearly zero interest in how senior the test team members are or how much they are paid. What I care about is how they can help me ship better software. I personally like leveraging the skills of people who can work through tons of scenarios that I can't meaningfully explore given the team's desired velocity; I'd be happy to walk a test team member through starting from existing unit tests or scenarios and building better coverage, or to read their test plans and provide feedback. But I might settle for "just good enough" coverage on my end and just hope the product owners and testers catch what I miss, if that's all the organization appears to value.
Somehow, you are going to need to start selling either your management or your most sympathetic developers on taking a more, dare I say it, agile approach to development and quality. I can't give you a formula for this, because I've not been that great at driving such change in organizations resistant to it, but your best bets are a business-value case (when talking to the business side) or perhaps a craftsmanship/continuous-improvement case on the technical side.