I've done this before with a SQL database (and on Google's Datastore), and the concept applies to Redis as well. It isn't very storage-efficient, but it's very fast. (If you want storage efficiency instead, use SQL with the LIKE keyword: it's much slower, but you only store one key per word.)
Take the word petrol. First store a key for each single letter: p, e, t, r, o, l. Then run through the word again in pairs of two: pe, et, tr, ro, ol. Then in groups of three: pet, etr, tro, rol. Then four: petr, etro, trol. Then five: petro, etrol. And finally the whole word: petrol.
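A minimal sketch of the idea in Python, using a plain dict of sets as an in-memory stand-in for Redis (with a real client, each `index.setdefault(...).add(...)` would be an `SADD` on a key named after the substring; the `idx:` prefix and function names here are my own invention, not from the answer above):

```python
def all_substrings(word):
    """Return every contiguous substring of `word`, lengths 1..len(word)."""
    n = len(word)
    return [word[i:i + k] for k in range(1, n + 1) for i in range(n - k + 1)]

# In-memory stand-in for Redis: each substring maps to the set of full
# words that contain it. With a real Redis client this would be roughly
# r.sadd("idx:" + sub, word) for each substring.
index = {}

def add_word(word):
    for sub in all_substrings(word):
        index.setdefault(sub, set()).add(word)

add_word("petrol")
# A lookup is then a single dict (or Redis key) access:
# index.get("tro") -> {"petrol"}
```

A query for any fragment of a stored word is now one key read, which is what makes the scheme fast.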
Now you can split it up into separate "tables" if you want to speed up the lookups further: create one table for single-character lookups, one for two-character lookups, and so on. If only a small fraction of your words are longer than 7 characters, you can throw all of those into the same "table".
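One way to sketch that routing, assuming substrings go to a bucket named after their length and everything over 7 characters shares one bucket (the `len:` naming is hypothetical, just to illustrate the split):

```python
def table_for(sub):
    # Route a substring to a per-length "table"; lengths above 7
    # all share the final bucket, as suggested above.
    return f"len:{min(len(sub), 7)}"

# e.g. "pe" goes to table "len:2", "petroleum" to "len:7"
```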
And that should give you very fast lookups at the cost of storing n(n+1)/2 keys per word, where n is the length of the word you're storing.
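You can sanity-check that count by generating the substrings and comparing against the formula (a quick verification sketch, not part of the scheme itself):

```python
def stored_key_count(word):
    """Count the substrings stored for one word and check n(n+1)/2."""
    n = len(word)
    # One substring per (start position, length) pair; repeated
    # substrings in a word (e.g. "aa" in "aaa") are counted per position.
    subs = [word[i:i + k] for k in range(1, n + 1) for i in range(n - k + 1)]
    assert len(subs) == n * (n + 1) // 2
    return len(subs)

# "petrol" has n = 6, so 6 * 7 / 2 = 21 stored keys
```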
In big-O terms, lookups are O(1) and space is O(n^2) keys per word; that's as fast as lookups can get.