The short answer is: no.
In fact, the other answer you linked is not really correct either. The critical point that has been missed is that link detection merely exploits the large size of the AC coupling capacitors to detect connected lanes.
It is subtle, but PCIe coupling capacitance is as large as it is for reasons other than initial link detection. Link detection simply takes advantage of these large coupling capacitances being available; it is not the reason they are as large as they are.
Regardless, PCIe link speed is negotiated and trained long after a link is detected as connected. The RC time constant is not used to determine the generation of the PCIe card, and any time constant greater than 7.6µs (probably less, it depends on the host controller) will result in the link being detected as connected.
Let me reiterate: the RC time constant of the coupling capacitors is only used to detect whether there is a correctly terminated lane on the other end. You could fool a PCIe host into believing a lane was connected using a purely passive circuit consisting of just a capacitor and a termination resistor, with no active circuitry. The host controller would still quickly flag the link as inactive, however, as it would receive no response during autonegotiation.
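To put rough numbers on that, here is a back-of-the-envelope sketch. The 50Ω terminations, the coupling capacitor values, and the 5pF parasitic figure are assumptions for illustration, not values pulled from the spec:

```python
# Receiver detect, roughly: the transmitter steps the common-mode
# voltage and watches how fast the line charges. A terminated lane
# charges through the coupling cap (large tau); an open lane only
# charges its own parasitics (tiny tau).

def detect_tau(c_coupling, r_tx=50.0, r_rx=50.0):
    """Time constant of the detect step into a terminated lane."""
    return (r_tx + r_rx) * c_coupling

for c_nF in (75, 100, 200):   # plausible coupling cap values, assumed
    print(f"{c_nF:>3} nF terminated: tau = {detect_tau(c_nF * 1e-9) * 1e6:.1f} us")

# Open lane: only a few pF of pad/trace parasitics load the driver.
print(f"open lane: tau = {50.0 * 5e-12 * 1e9:.2f} ns (5 pF assumed)")
```

The microsecond-scale charge time is what the host reads as "something is terminating this lane"; it says nothing about which generation that something supports.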
Again, PCIe speed (and which generation the card supports) is determined only after a good deal of handshaking and link training has occurred.
The real reason PCIe coupling capacitors are so large is due to DC wander (also called baseline wander).
The signal must be truly AC, meaning it spends as much time in one state ('0') as in the other ('1') on average. If you transmit more 1s than 0s, the coupling capacitor gets charged in one direction more than it gets discharged in the other.
This results in a net charge imbalance: the coupling capacitor ends up partially charged, which degrades the signal amplitude of the differential pair. The capacitor now acts like a voltage source in series with each half of the pair, with a polarity that opposes either the high or the low signal levels, depending on the direction of the charge imbalance.
PCIe 1.x and 2.x deal with this problem using 8b/10b encoding, which transmits every 8 bits of data as a 10-bit symbol on the physical layer (25% overhead). This allows for some cleverness in how those 8 bits are encoded, so that you get both clock recovery and DC balance with a bounded short-run bit disparity.
The run length will never exceed five 0s or five 1s in a row, and over any two consecutive symbols, the total disparity will never exceed ±2 bits.
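As a concrete illustration of those two bounds, here is a toy checker. It is not an 8b/10b encoder; it only verifies the properties an already-encoded stream must obey, using the two disparity forms of the classic K28.5 comma symbol as test data (if those bit patterns are off, treat them as illustrative):

```python
def check_8b10b_properties(bits, max_run=5):
    """Check the run-length and disparity bounds an 8b/10b stream obeys."""
    run, prev = 0, None
    for b in bits:
        run = run + 1 if b == prev else 1
        prev = b
        assert run <= max_run, f"run of {run} identical bits"
    running = 0
    for i in range(0, len(bits), 10):        # walk 10-bit symbols
        sym = bits[i:i + 10]
        d = 2 * sum(sym) - len(sym)          # ones minus zeros
        assert d in (-2, 0, 2), f"symbol disparity {d:+d}"
        running += d                         # signs alternate to cancel
        assert abs(running) <= 2, f"running disparity {running:+d}"
    return "ok"

# Each K28.5 form is off-balance by exactly 2 bits (and contains the
# maximum-length run of 5); alternating the forms cancels the imbalance.
k28_5_neg = [0, 0, 1, 1, 1, 1, 1, 0, 1, 0]
k28_5_pos = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(check_8b10b_properties(k28_5_neg + k28_5_pos))   # -> ok
```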
However, this still means that over short, sub-symbol (10-bit) time scales, a bias can briefly build up in the coupling capacitor, and the resulting loss of signal depends on the time constant. The larger the capacitor, the less voltage appears across it after, say, five bit times of charging in one direction, simply because a larger capacitor takes longer to charge and the voltage across it stays that much lower.
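To see the scale of the effect, here is a quick estimate of the wander from five identical bits at PCIe 1.x's 2.5GT/s line rate. The 100Ω loop resistance and 0.5V sustained level are round-number assumptions:

```python
import math

# Baseline wander from a worst-case run of five identical bits,
# as a function of coupling capacitor size.
R = 100.0             # ohms, TX + RX termination in the loop (assumed)
V = 0.5               # volts of sustained DC during the run (assumed)
t = 5 * (1 / 2.5e9)   # five bit times at 2.5 GT/s

for c in (10e-9, 100e-9):
    droop = V * (1 - math.exp(-t / (R * c)))   # RC step response
    print(f"{c * 1e9:>4.0f} nF cap: ~{droop * 1e3:.2f} mV of wander after 5 bits")
```

Ten times the capacitance buys you roughly ten times less wander for the same worst-case run.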
PCIe is a fast bus, and there is not a lot of bandwidth headroom. Using larger coupling capacitors ensures that the maximum possible short-run bias remains low enough that the bandwidth (and thus the bit rate) stays in spec.
As small as a single bit time is compared to the coupling time constant, even the small bias that builds up from five identical bits in a row can degrade the signal and push the bandwidth below spec, potentially causing lost bits until the DC bias averages back out.
PCIe 3.0 uses a completely different encoding, 128b/130b, which offers no guarantee of DC balance. Instead, a scrambling polynomial is used to even out the number of 1s and 0s transmitted, even when the data contains far more of one bit than the other before scrambling. This doesn't make balance certain, but it is a strong statistical mitigation: larger bit disparities become increasingly improbable.
However, this still requires the channel to tolerate somewhat looser limits on DC wander/baseline wander than could occur with 8b/10b encoding. To compensate for the potential for more unbalanced 1s and 0s over short run lengths, the coupling capacitance had to be increased.
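A minimal sketch of the scrambling idea is below. The polynomial is the one usually quoted for the PCIe 3.0 scrambler, x^23 + x^21 + x^16 + x^8 + x^5 + x^2 + 1, but the Fibonacci form, seed, and bit ordering here are my own assumptions, not the spec's:

```python
def lfsr_stream(n, taps=(21, 16, 8, 5, 2, 0), seed=0x7A5A5):
    """Yield n bits from a 23-bit Fibonacci LFSR.

    Tap positions correspond to the polynomial's exponents below x^23
    (the x^0 term means the oldest bit feeds back too).
    """
    state = seed                              # any nonzero value works
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        yield state & 1                       # output the oldest bit
        state = (state >> 1) | (fb << 22)     # shift, insert feedback

# Worst-case payload: a long run of zeros. XORing with the LFSR output
# turns it into a statistically balanced pseudo-random stream.
data = [0] * 100_000
scrambled = [d ^ p for d, p in zip(data, lfsr_stream(len(data)))]

ones = sum(scrambled) / len(scrambled)
longest, run, prev = 0, 0, None
for b in scrambled:
    run = run + 1 if b == prev else 1
    prev = b
    longest = max(longest, run)
print(f"ones: {ones:.1%}, longest run: {longest} bits")
```

You should see roughly half ones and a longest run somewhere around 16-20 bits: balance is never guaranteed, but runs long enough to cause serious wander become exponentially unlikely.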
All of this deeper technical rationale is merely co-opted for use as presence detection, and the differences in the capacitance ranges have no impact on that process.
As for why your card is having problems, I would first ask how exactly you determined that it is failing to progress past basic presence detection, because several more phases have to succeed before the card can be enumerated by the BIOS.
Certain hosts have had bugs that can cause problems when a PCIe slot's speed is set to 'Auto' in the BIOS, so if you have an older BIOS, try updating it. It is also worth double-checking that the BIOS isn't locking that slot to PCIe 3.0 only; in that case, it would obviously only work with PCIe 3.0 capable cards, and PCIe 2.0 cards would fail to enumerate.
After ruling out those easier fixes, I would guess a hardware problem, probably a cracked coupling capacitor or bad termination. Those are ceramic capacitors, usually 0402 or 0201 size, and they are very easy to crack if you are, for example, doing rework and replacing a coupling capacitor that got ripped off (also easy to do).
If the card isn't a 1X card, however, then every single lane would have to be compromised, and it seems unlikely that every lane would have a cracked capacitor.
I can't really speculate much beyond that, but I will say for certain that 100nF coupling capacitors are not what is preventing your card from being detected, assuming the host controller isn't somehow compromised or damaged and messing with the time constant on its end.
Just put a scope on it and verify the rate at which the line level changes during those initial detection pulses. If it is within spec for any PCIe generation, then that is not the problem.