Automatic image segmentation is an important tool in medical imaging, saving considerable time and cost compared to manual segmentation performed by expert radiologists. One popular approach to image segmentation is deep learning, where convolutional neural networks (CNNs) are used to determine how the image should be segmented. In previous work, the idea of using implicit curves to define 2D segmentation boundaries via CNNs was introduced. Representing the implicit function with tensor-product splines naturally equips the method with a smoothness prior, which is well suited to medical data given the smooth shapes of organs. In this work, we extend that approach by performing the segmentation on the entire 3D volume. Inferring the segmentation boundary in 3D has several advantages, such as avoiding artifacts that arise in single slices and enforcing smoothness in all three spatial dimensions. However, neural networks that perform 3D convolutions often suffer from high memory consumption, simply because of the large amount of data that must be held in memory concurrently. Here, we exploit a recent architecture that tackles this issue, known as Dense V-Net. The approach is implemented in PyTorch and evaluated against state-of-the-art methods on medical datasets spanning multiple modalities, such as CT and MRI, and segmentation classes covering various organs.
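The implicit spline representation referred to above can be illustrated with a minimal 2D sketch. This is a hypothetical example, not the paper's implementation: it uses SciPy's `RectBivariateSpline` as the tensor-product spline, and hand-crafted coefficients in place of CNN-predicted ones. The segmentation is taken to be the region where the implicit function is positive, with the boundary given by its zero level set.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical stand-in for CNN output: control values on a coarse grid
# whose zero level set approximates a circle of radius 0.5.
n = 8
u = np.linspace(-1.0, 1.0, n)          # control-grid coordinates
U, V = np.meshgrid(u, u, indexing="ij")
control = 0.5**2 - (U**2 + V**2)       # positive inside the circle

# Tensor-product cubic spline implicit function f(x, y);
# the spline basis provides the smoothness prior on the boundary.
f = RectBivariateSpline(u, u, control, kx=3, ky=3)

# Evaluate on a fine pixel grid; the segmentation mask is the
# zero super-level set of the implicit function.
x = np.linspace(-1.0, 1.0, 128)
mask = f(x, x) > 0.0                   # boolean segmentation mask, (128, 128)

print(mask.shape)
print(bool(mask[64, 64]))              # image centre lies inside the boundary
```

In the 3D setting described in the abstract, the same idea would apply with a trivariate tensor-product spline over the volume, with the network predicting the coefficient grid.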