I'm not sure how banks design their entire systems. Also, 5 billion customers is more than any bank actually has. But that's beside the point.
An interesting tip I got from a CTO for handling large transactional data like this:
Once a table passes roughly 10 million rows, you know you have to shard it. Until that point, trust the SQL DB to do the right thing, efficiently. To shard, pick a time window and create tables (on the fly) that contain data only for that window, so you might have one table per week or per month depending on your traffic. At query time, you almost always know which date ranges you are looking for. Query the tables that would contain this data in an async fashion, then combine the results. It's essentially MapReduce for SQL databases, and not much harder to implement.
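A minimal sketch of what that looks like, using sqlite3 and a thread pool as stand-ins for a real database and async client (table names, schema, and the per-month window are my assumptions, not from the original tip):

```python
import os
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor
from datetime import date

DB_PATH = os.path.join(tempfile.mkdtemp(), "txns.db")

def table_for(d: date) -> str:
    # One shard table per month, named after the window it covers.
    return f"txns_{d.year}_{d.month:02d}"

def insert(conn, txn_id: int, d: date, amount: float):
    # Create the shard table on the fly the first time its window is hit.
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {table_for(d)} "
        "(id INTEGER, txn_date TEXT, amount REAL)"
    )
    conn.execute(
        f"INSERT INTO {table_for(d)} VALUES (?, ?, ?)",
        (txn_id, d.isoformat(), amount),
    )

def months_between(start: date, end: date):
    # Every monthly window the requested date range touches.
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        yield date(y, m, 1)
        m += 1
        if m == 13:
            y, m = y + 1, 1

def query_shard(month: date, start: date, end: date):
    # "Map" step: each shard query gets its own connection so the
    # queries can run in parallel.
    conn = sqlite3.connect(DB_PATH)
    try:
        exists = conn.execute(
            "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?",
            (table_for(month),),
        ).fetchone()
        if not exists:
            return []  # no traffic in this window yet
        return conn.execute(
            f"SELECT id, txn_date, amount FROM {table_for(month)} "
            "WHERE txn_date BETWEEN ? AND ?",
            (start.isoformat(), end.isoformat()),
        ).fetchall()
    finally:
        conn.close()

def query_range(start: date, end: date):
    # Scatter the query across every shard the range touches,
    # then gather and combine the partial results ("reduce").
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda m: query_shard(m, start, end),
                         months_between(start, end))
    return sorted(row for part in parts for row in part)

conn = sqlite3.connect(DB_PATH)
insert(conn, 1, date(2024, 1, 15), 10.0)
insert(conn, 2, date(2024, 2, 3), 20.0)
insert(conn, 3, date(2024, 3, 9), 30.0)
conn.commit()
conn.close()

rows = query_range(date(2024, 1, 1), date(2024, 2, 28))
# rows holds transactions 1 and 2, pulled from two separate shard tables
```

The point of knowing the date range up front is that you only touch the shards that can possibly contain matches; every other table is skipped entirely.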
For other, more static data, such as a Customers table, most enterprises I've seen shard on the value of the id column: they estimate how many users they'll see in the next 5 years and provision that many horizontal shards up front.
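The id-based routing itself can be as simple as a modulo, assuming the shard count is fixed up front from that growth estimate (the names and numbers here are hypothetical):

```python
# Shard count sized once from the growth estimate, e.g. projected
# 50M users at a few million rows per shard. It cannot change later
# without rehashing and moving every row, which is exactly why the
# estimate covers years of growth in advance.
NUM_SHARDS = 16

def shard_for(customer_id: int) -> str:
    # Modulo on the id gives a stable, roughly uniform mapping
    # from customer to shard table.
    return f"customers_{customer_id % NUM_SHARDS:02d}"

print(shard_for(12345))  # -> customers_09
print(shard_for(12346))  # -> customers_10
```

Since lookups on this kind of data are almost always by id, the router can pick the right shard without fanning out at all.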