Dima G, this is interesting. I don't understand the details, but from my quick read of the article, it looks like they are using formal logic to constrain the vectors during training, which creates what looks to me like a more structured latent space.
To answer your questions more directly, I think they are actually using a neural network to train their "hyper-dimensional vectors," which are similar to vectors in the latent space of a neural network, including LLMs. In case you're not familiar, the latent space of a neural network is the high-dimensional vector representation of knowledge that flows through the network. Although I say "high," the dimensionality is actually very low compared to the number of possible word combinations, so the representation is extremely compressed. In some sense, that is the job of the neural network: to compress information into this latent space and then manipulate it.
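To make the compression concrete, here's a toy sketch of my own (not from the article): a sequence of words gets squeezed into a single low-dimensional vector. The random projection is just a stand-in for a learned encoder, and the sizes are typical values I picked, not the paper's.

```python
import numpy as np

# A vocabulary of 50,000 words and a 10-word context allows 50,000**10
# possible word combinations, yet a typical latent vector has ~10^3 dims.
vocab_size = 50_000
latent_dim = 768  # a common transformer hidden size

rng = np.random.default_rng(0)

# Stand-in "encoder": a fixed random projection from one-hot word vectors
# down to the latent space. A real network learns this mapping.
projection = rng.normal(0, 1 / np.sqrt(latent_dim), (vocab_size, latent_dim))

def embed(word_ids):
    """Map a sequence of word ids to one compressed latent vector."""
    return projection[word_ids].mean(axis=0)

z = embed([12, 4031, 777])  # three words -> one 768-dim vector
print(z.shape)              # (768,)
```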
Their idea seems to be, based on what I read, that they can control and interpret the latent space using formal logic, which seems very useful. It would come at the cost of less information per parameter (since the formal logic constrains the entropy of the representation), but that seems like a good tradeoff.
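If I had to guess at the mechanics, "constraining the vectors with formal logic" might look something like a differentiable penalty added to the training loss. This is purely my own speculation: the rule, the fuzzy-logic encoding, and the names below are all assumptions, not the paper's method.

```python
import torch

# Hypothetical: read sigmoid(z[:, i]) as the truth value of concept i,
# then encode a logical rule like "A implies B" as a soft penalty.
def implication_loss(z, a_idx, b_idx):
    """Penalize latent states where concept A is true but B is not."""
    a = torch.sigmoid(z[:, a_idx])
    b = torch.sigmoid(z[:, b_idx])
    # Product fuzzy logic: truth(A -> B) = 1 - a * (1 - b),
    # so the violation to minimize is a * (1 - b).
    return (a * (1.0 - b)).mean()

z = torch.randn(32, 768, requires_grad=True)  # batch of latent vectors
task_loss = z.pow(2).mean()                   # placeholder for the real loss
loss = task_loss + 0.1 * implication_loss(z, a_idx=5, b_idx=17)
loss.backward()
```

Each rule you add this way shrinks the set of latent states the network can use, which matches the entropy tradeoff I mentioned above.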
I'm not sure I understood the concept very well, but this is definitely an interesting thing to think about!
Thanks for the pointer!