Calcium imaging is a key method to record the spiking activity of identified and genetically targeted neurons. However, the observed calcium signals are only an indirect readout of the underlying electrophysiological events (single spikes or bursts of spikes) and require dedicated algorithms to recover the spike rate. These spike inference algorithms can be optimized using ground truth data from combined electrical and optical recordings, but it is not clear how such optimized algorithms perform on cell types and brain regions for which no ground truth exists. Here, we use a state-of-the-art algorithm based on supervised deep learning (CASCADE) and an unsupervised algorithm based on non-negative deconvolution (OASIS) to test spike inference in spinal cord neurons. To enable these tests, we recorded specific ground truth from glutamatergic and GABAergic somatosensory neurons in the dorsal horn of the spinal cord in mice of both sexes. We find that CASCADE and OASIS algorithms designed for cortical excitatory neurons generalize well to both spinal cord cell types. However, CASCADE models re-trained on our ground truth further improve performance, resulting in more accurate inference of spiking activity from spinal cord neurons. We openly provide the re-trained models, which can be flexibly applied to spinal cord data recorded at variable noise levels and frame rates. Together, our ground-truth recordings and analyses provide a solid foundation for the interpretation of calcium imaging data from the spinal cord and showcase how spike inference can generalize between different regions of the nervous system.
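
As a minimal sketch of how such openly provided models could be applied with the Cascade toolbox, the snippet below infers spike rates from ΔF/F traces; the model name used here is a hypothetical placeholder, and the traces are random stand-in data, since the actual released model identifiers are not specified in this section.

```python
import numpy as np
from cascade2p import cascade

# Stand-in dF/F traces, shape (n_neurons, n_timepoints); replace with real recordings
dff_traces = np.random.rand(10, 3000) * 0.1

# Placeholder model name (assumption): pick a released model that matches the
# recording's frame rate and noise level
model_name = 'Spinal_cord_30Hz'

cascade.download_model(model_name)                     # fetch pretrained model weights
spike_prob = cascade.predict(model_name, dff_traces)   # inferred spike rates per time bin
```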