Image-driven medical applications can help medical experts visualize tissues and organs, and thus facilitate the task of identifying anomalies and tumors. However, to ensure reliable results, the regions of the image that enclose the organs or tissues of interest must be precisely visualized. Volume rendering is a technique for visualizing volumetric data by computing a 2D projection of the 3D data. Traditionally, volume rendering generates a semi-transparent image, enhancing the depiction of the area of interest. In the visualization of medical images in particular, the identification of areas of interest depends on existing characterizations of the tissues, their corresponding intensities, and the medical image acquisition modality, e.g., Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). However, precise classification of a tissue requires specialized segmentation processes to distinguish neighboring tissues that share overlapping intensity ranges. Semantic annotations based on ontologies such as RadLex and the Foundational Model of Anatomy (FMA) make it possible to annotate the areas that enclose particular tissues, which can improve the segmentation process and the quality of the volume rendering. We survey state-of-the-art approaches that support medical image discovery and visualization based on semantic annotations, and show the benefits of semantically encoding medical images for volume rendering. As a proof of concept, we present ANISE (an ANatomIc SEmantic annotator), a framework for the semantic annotation of medical images. Finally, we describe the improvements achieved by ANISE in rendering a benchmark of medical images, enhancing the segmented parts of the organs and tissues that comprise the studied images.