To address these limitations, this study introduces a hybrid model that integrates a Graph Convolutional Network (GCN) with an attention-enhanced Long Short-Term Memory (LSTM) architecture. By ...
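The two building blocks named above can be illustrated in a minimal NumPy sketch. This is not the paper's implementation: the layer sizes, the dot-product attention form, and all function names here are illustrative assumptions; a GCN layer computes ReLU(Â H W) with a symmetrically normalized adjacency, and the attention step pools LSTM hidden states by learned relevance.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: ReLU(A_hat @ H @ W).

    A_hat = D^{-1/2} (A + I) D^{-1/2} adds self-loops and symmetrically
    normalizes the adjacency (assumed form; standard Kipf-Welling GCN).
    """
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt
    return np.maximum(a_hat @ features @ weights, 0.0)  # ReLU

def attention_pool(hidden_states, query):
    """Dot-product attention over a sequence of LSTM hidden states.

    hidden_states: (T, d) array of per-step states; query: (d,) vector.
    Returns a softmax-weighted sum of the states (illustrative form only).
    """
    scores = hidden_states @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ hidden_states
```

In a hybrid of this kind, the GCN output would typically supply node embeddings that the attention-enhanced LSTM consumes as a sequence; the exact wiring depends on details elided in the excerpt.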
Our model uses a pretrained VGG16 that processes groups of five slices simultaneously, and features multiple end-to-end LSTM branches, each specialized in predicting one caption, subsequently ...
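The slice-grouping step described above can be sketched at the array level. This is a shape-only illustration under assumed conventions (a volume stored as `(num_slices, H, W)`, non-overlapping groups, any remainder dropped); the paper's actual grouping strategy is not specified in the excerpt.

```python
import numpy as np

def group_slices(volume, group_size=5):
    """Split a stack of 2-D slices into non-overlapping groups.

    volume: (num_slices, H, W) array. Slices that do not fill a complete
    group are dropped (an assumption; the source may pad instead).
    Returns an array of shape (num_groups, group_size, H, W), where each
    group would be fed to the VGG16 feature extractor together.
    """
    n = (volume.shape[0] // group_size) * group_size
    return volume[:n].reshape(-1, group_size, *volume.shape[1:])
```

Each group's features would then be routed to the per-caption LSTM branches; that routing is not reconstructable from the truncated text.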