They are different.
Scalability, generally, refers to how a system behaves when the expected throughput is several orders of magnitude larger.
For example, if I write a website which keeps track of my finances, and I want to optimize it for speed, I might decide to keep ALL the data in memory. This means my website can respond more quickly - it does not have to get the data off disk (let's forget, for the moment, what happens when my server is rebooted).
For one user, this might be highly optimized. However, it lacks scalability; when my website reaches 1,000,000 users, keeping all that data in memory is no longer an option.
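To make that concrete, here is a minimal sketch of the "fast" single-user design: every record lives in one in-process dict, so reads never touch the disk. The function names and data shape are made up for illustration; the point is that the whole dataset has to fit in one process's RAM.

```python
from collections import defaultdict

# All financial records live in RAM; reads never touch the disk.
records = defaultdict(list)  # user_id -> list of transactions

def add_transaction(user_id: int, amount: float, note: str) -> None:
    records[user_id].append({"amount": amount, "note": note})

def balance(user_id: int) -> float:
    # Served entirely from memory - fast, but the memory footprint
    # grows with every user and every transaction.
    return sum(t["amount"] for t in records[user_id])

add_transaction(1, 1200.0, "salary")
add_transaction(1, -300.0, "rent")
print(balance(1))  # 900.0
```

This is perfectly reasonable at small scale; it simply has no answer for the millionth user.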
Most scalability issues come up when a system has to be divided over several servers.
Things behave differently in large volumes: databases may no longer fit onto a single server; do you shard your database, or do you replicate it? If you split your website over several servers, do you use several identical servers, do you separate static data from dynamic data, do you keep specific large assets on a content distribution network (CDN)? How will that affect your website? Do you use load balancing switches, or load balancing DNS?
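As a hedged sketch of one answer to the "shard or replicate?" question, here is hash-based sharding: route each user to one of several database servers by hashing their id. The shard hostnames are hypothetical, and real systems usually prefer consistent hashing so that adding a shard doesn't reshuffle every key.

```python
import hashlib

SHARDS = ["db0.example.com", "db1.example.com",
          "db2.example.com", "db3.example.com"]

def shard_for(user_id: int) -> str:
    # A stable hash, so the same user always lands on the same shard.
    digest = hashlib.sha1(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for(42))         # every call routes user 42 to the same server
print(shard_for(1_000_000))  # other users spread across the other shards
```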
What happens if a server goes down? If you have one server, no matter how optimized it is, it means an outage; this is fine for 1 or 5 or 10 users, but for Google? Google, at any given time, WILL have servers down. It is no longer a matter of "a drive has failed, page a sysadmin to fix it", but "add it to the list; the failed drive replacement engineer will trundle his cart around in a few minutes". It might not be worth it for Google to use RAID, since they already have a "redundant array of inexpensive servers".
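A minimal sketch of that "redundant array of inexpensive servers" idea, under stated assumptions (the hostnames are invented, and `fetch` is a stand-in for a real HTTP call): try replicas in turn, and treat a dead server as routine rather than an outage.

```python
import random

REPLICAS = ["app0.example.com", "app1.example.com", "app2.example.com"]

def fetch(host: str, path: str) -> str:
    # Placeholder for a real network call; one host is "down" here
    # purely so the failover path is exercised.
    if host == "app1.example.com":
        raise ConnectionError(f"{host} is down")
    return f"response from {host}{path}"

def fetch_with_failover(path: str) -> str:
    hosts = random.sample(REPLICAS, len(REPLICAS))  # crude load spreading
    for host in hosts:
        try:
            return fetch(host, path)
        except ConnectionError:
            continue  # add the host to the repair list and move on
    raise RuntimeError("all replicas are down")

print(fetch_with_failover("/balance"))
```

The individual server stops mattering; only the pool does, which is exactly the shift in thinking that scalability demands.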