The modifications work very well, thanks. However, I encountered an error while running LAMMPS in the subsequent cells. The error is:
```
LAMMPS (29 Aug 2024 - Development - 6d634dc-modified)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:99)
  using 1 OpenMP thread(s) per MPI task
Reading data file ...
  orthogonal box = (0 0 0) to (10.862 10.862 10.862)
  1 by 1 by 1 MPI processor grid
  reading atoms ...
  64 atoms
  read_data CPU = 0.001 seconds
Allegro is using input precision f and output precision d
Allegro: Loading model from ../si-deployed.pth
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
           0 |           Si |           1 |          Si
Neighbor list info ...
  update: every = 1 steps, delay = 5 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 6
  ghost atom cutoff = 7
  binsize = 3, bins = 4 4 4
  2 neighbor lists, perpetual/occasional/extra = 1 1 0
  (1) pair allegro, perpetual
      attributes: full, newton on, ghost
      pair build: full/bin/ghost
      stencil: full/ghost/bin/3d
      bin: standard
  (2) compute rdf, occasional
      attributes: half, newton on
      pair build: half/bin/atomonly/newton
      stencil: half/bin/3d
      bin: standard
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.001
Exception: expected scalar type Double but found Float
Exception raised from data_ptr at aten/src/ATen/core/TensorMethods.cpp:18 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7b2b5fe3b0eb in /content/libtorch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7b2b5fe36abe in /content/libtorch/lib/libc10.so)
frame #2: double* at::TensorBase::data_ptr<double>() const + 0x10b (0x7b2b9fe1950b in /content/libtorch/lib/libtorch_cpu.so)
frame #3: at::TensorAccessor at::TensorBase::accessor() const & + 0x55 (0x5aa5d7322075 in ../lammps/build/lmp)
frame #4: <unknown function> + 0x5a0eb0 (0x5aa5d732deb0 in ../lammps/build/lmp)
frame #5: <unknown function> + 0x2a12a2 (0x5aa5d702e2a2 in ../lammps/build/lmp)
frame #6: <unknown function> + 0x219511 (0x5aa5d6fa6511 in ../lammps/build/lmp)
frame #7: <unknown function> + 0x10676d (0x5aa5d6e9376d in ../lammps/build/lmp)
frame #8: <unknown function> + 0x106b6e (0x5aa5d6e93b6e in ../lammps/build/lmp)
frame #9: <unknown function> + 0xf5e71 (0x5aa5d6e82e71 in ../lammps/build/lmp)
frame #10: <unknown function> + 0x29d90 (0x7b2b5f1d4d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #11: __libc_start_main + 0x80 (0x7b2b5f1d4e40 in /lib/x86_64-linux-gnu/libc.so.6)
frame #12: <unknown function> + 0xf75c5 (0x5aa5d6e845c5 in ../lammps/build/lmp)

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
```
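If I understand the exception correctly, it is a dtype mismatch: the log says the deployed model uses input precision `f` (float32), while the C++ side requests a `double` pointer from a tensor via `data_ptr<double>()`, which is exactly where the backtrace points. A minimal PyTorch sketch of the same failure mode (this is an illustration, not the actual pair_allegro code):

```python
import torch

# Mixing float32 and float64 tensors in one op raises a RuntimeError
# about mismatched scalar types, analogous to requesting double data
# from a float32 tensor on the libtorch C++ side.
a = torch.zeros(3, dtype=torch.float32)   # like the float32 model tensors
b = torch.zeros(3, dtype=torch.float64)   # like a double-precision LAMMPS buffer

try:
    torch.dot(a, b)                       # dtype mismatch raises RuntimeError
except RuntimeError as err:
    print("mismatch:", err)

# Making the dtypes agree resolves it, e.g. by casting up to float64:
result = torch.dot(a.double(), b)
print(result.dtype)                       # torch.float64
```

So presumably the fix is to make both sides agree on precision, for example by redeploying the model in double precision (or configuring the LAMMPS plugin for float32); I am not sure which side is configurable in this build.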