The idea behind TDD is that creating the tests is part of the development. You shouldn't really think of it as development + unit tests; the better way to view it is development = functionality + unit tests, where creating the unit tests is not a separate activity. While your developers are working on the functionality, they are also creating the unit tests.
Likewise, your task board shouldn't have a separate task for "Create Unit Tests" - the unit tests are implemented alongside the functionality, and there really shouldn't be any separation between the two.
It shouldn't be 1.5 weeks of development followed by 1.5 weeks of unit tests. Instead, you want 3 weeks of development that ends with the functionality in a fully tested state. If your Definition of Done specifies that every piece of functionality is unit tested, then getting the functionality implemented and getting the unit tests done are the same piece of work.
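To make that concrete, here is a minimal sketch of what "the test ships with the code" looks like. The function, its rules, and the use of Python's standard unittest module are all my own illustration, not anything from your project:

```python
# A minimal sketch of writing the unit tests together with the functionality.
# The shipping_cost function and its pricing rules are invented for illustration.
import unittest


def shipping_cost(order_total: float) -> float:
    """Orders of 100 or more ship free; everything else pays a flat 5."""
    return 0.0 if order_total >= 100 else 5.0


class ShippingCostTest(unittest.TestCase):
    def test_small_order_pays_flat_rate(self):
        self.assertEqual(shipping_cost(40.0), 5.0)

    def test_large_order_ships_free(self):
        self.assertEqual(shipping_cost(150.0), 0.0)


if __name__ == "__main__":
    unittest.main()
```

The point isn't the example itself but that the test class lives right next to the function and is written in the same sitting - there is no later "testing phase" for this piece of functionality.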
As for your other question regarding the integration of QA: we have QA resources as part of the team, testing each feature as it becomes available, plus a daily (or nightly) full regression test. If a feature's implementation is done on day 3 of the Sprint, our QA will start testing it that same day - ideally they have developed their test scripts in parallel while the developer implemented the feature.
Both sides need to work hand in hand, with lots of communication between the developers and QA in the team: "Hey, feature A is now ready, you can start testing it." and "I found the following issue in my test for feature B, can you please fix it?" This back and forth goes on all the time during the Sprint, with the goal that everything is tested and working at the end.
Combine that with a full regression test run every night, where the results are published, and you can pretty much develop up to the last day of the Sprint. Testing should happen at all stages of the Sprint, not just the last couple of days.
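If it helps to picture the nightly run, here is a rough sketch of such a job in the same Python as above. The suite location, report directory, and the use of pytest's JUnit-XML output are assumptions on my part, not a description of any particular setup:

```python
# Hypothetical sketch of a nightly regression run: execute the full suite and
# write a timestamped, machine-readable report that a CI server can publish.
import datetime
import pathlib
import subprocess


def run_nightly_regression(suite_dir: str = "tests/",
                           report_dir: str = "reports/") -> int:
    """Run the whole test suite and return its exit code."""
    pathlib.Path(report_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    report = pathlib.Path(report_dir) / f"regression-{stamp}.xml"
    # --junit-xml produces a report most CI servers can display for the team.
    result = subprocess.run(["pytest", suite_dir, f"--junit-xml={report}"])
    return result.returncode


if __name__ == "__main__":
    raise SystemExit(run_nightly_regression())
```

Hook something like that up to whatever scheduler or CI server you already use (a simple cron entry is enough) so the whole team sees the results each morning.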