You decide on a starting size, and a factor by which to grow the table each time you resize it. For example, say you pick a starting size of 11 and a growth factor of 2.
Before you go any further, you write a little program that prints the smallest prime >= 11 (which is 11), the smallest prime >= 2*11 = 22 (which is 23), the smallest prime >= 2*23 = 46 (which is 47), and so on.
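Such a generator can be a throwaway program in any language. Here is a minimal sketch in C; the 100 million cutoff is an arbitrary stopping point, not anything from the original:

```c
#include <stdio.h>

/* Trial-division primality test; slow but fine for a one-off generator. */
static int is_prime(unsigned long n) {
    if (n < 2) return 0;
    for (unsigned long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(void) {
    unsigned long target = 11;          /* starting size */
    while (target < 100000000UL) {      /* generate sizes up to ~100M */
        unsigned long p = target;
        while (!is_prime(p)) p++;       /* smallest prime >= target */
        printf("%lu,\n", p);
        target = 2 * p;                 /* next target: grow by factor 2 */
    }
    return 0;
}
```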
Take the output of this program and add a static array with these sizes to your hashing code: 11, 23, 47, 97, 197, etc. When you want to grow the table and the current size is x, you find the smallest number > x in the array, and that is the new size.
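Pasting the generator's output into your code might look like the sketch below; `next_size` is a hypothetical helper name, and only the first ten sizes are shown:

```c
#include <stddef.h>

/* Output of the generator, pasted in (the real table continues further). */
static const unsigned long table_sizes[] = {
    11, 23, 47, 97, 197, 397, 797, 1597, 3203, 6421, /* ... */
};

/* Smallest precomputed size strictly greater than the current size x. */
static unsigned long next_size(unsigned long x) {
    for (size_t i = 0; i < sizeof table_sizes / sizeof table_sizes[0]; i++)
        if (table_sizes[i] > x)
            return table_sizes[i];
    return 0; /* table exhausted; the caller must handle this */
}
```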
You may experiment with different growth factors. A factor of 1.5 instead of 2.0 forces you to resize the table more often, but it may waste less memory: if you have 1,000,000 entries, then growing to 1.5 million slots instead of 2 million saves 25% of the memory, as long as 1.5 million is enough.
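In the generator sketch above, only the growth step changes. Assuming integer arithmetic that rounds down, the sequence of sizes would come out as:

```c
target = p + p / 2;   /* factor 1.5 instead of 2 */
/* Resulting sizes: 11, 17, 29, 43, 67, 101, 151, ... */
```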
Some implementations let the caller supply an expected capacity when the hash table is created. If yours does, you should assume that what you were told is the truth, or close to it. So if you are told "capacity one million", you would start the table with enough slots that holding one million entries still gives you a decent load factor, say 1.5 million slots or whatever number you find works well. And you wouldn't resize until the number of entries is well past the initial capacity: not at 1,000,001 entries, but maybe at 1,050,000.
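As a sketch of how the capacity hint could interact with the size table, using the `next_size` helper from above; all names are illustrative, and the 1.5x slot headroom and 70% resize threshold are assumptions chosen to reproduce the numbers in the text:

```c
typedef struct {
    unsigned long nslots;   /* current number of slots */
    unsigned long grow_at;  /* entry count that triggers the next resize */
    /* ... buckets, entry count, hash function, etc. ... */
} hash_table;

void hash_table_init(hash_table *t, unsigned long capacity_hint) {
    /* Enough slots that capacity_hint entries leave a decent load
       factor: 1.5x the hint, rounded up to a precomputed prime size. */
    t->nslots = next_size(capacity_hint + capacity_hint / 2);
    /* Resize at roughly 70% occupancy, not the moment the hint is
       exceeded: for ~1.5 million slots that is about 1,050,000 entries. */
    t->grow_at = t->nslots / 10 * 7;
}
```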