Cross-modal perception, or intersensory phenomena, has been a long-standing research topic in psychology and neurology, and various studies have discovered strong correlations between human perception of auditory and visual stimuli. Despite the many existing works in computational multimodal modeling, a large portion of the effort has focused on indexing and retrieval of multimedia content. Although these works explore joint representations of multiple modalities and their correlations, they do not need to model the fine details of individual samples. In contrast, the focus of our research is fine-grained cross-modal audio-visual generation, which advances the frontier of multimodal modeling. We have developed audio-visual source association algorithms that segment corresponding audio-visual data pairs, and we have created deep generative neural networks, trained adversarially, that generate one modality (audio or visual) from the other (visual or audio). The outputs of cross-modal generation are beneficial to many applications, such as aiding the hearing- or visually-impaired and content creation in virtual reality.
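
To make the adversarial cross-modal setup concrete, the following is a minimal sketch, assuming a PyTorch implementation and hypothetical shapes (a 128-dimensional visual embedding conditioning a generator that outputs a 128x128 spectrogram); the actual architecture and feature extractors used in our work are not specified in this section, so the class names, layer sizes, and training loop below are illustrative only.

# Minimal sketch of conditional adversarial cross-modal generation
# (visual -> audio direction), assuming PyTorch and hypothetical sizes.
import torch
import torch.nn as nn

class SpectrogramGenerator(nn.Module):
    """Maps a visual embedding plus noise to a (1, 128, 128) spectrogram."""
    def __init__(self, img_dim=128, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + noise_dim, 512), nn.ReLU(),
            nn.Linear(512, 128 * 128), nn.Tanh(),
        )

    def forward(self, img_emb, noise):
        x = torch.cat([img_emb, noise], dim=1)
        return self.net(x).view(-1, 1, 128, 128)

class SpectrogramDiscriminator(nn.Module):
    """Scores whether a spectrogram matches the conditioning visual embedding."""
    def __init__(self, img_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128 * 128 + img_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),
        )

    def forward(self, spec, img_emb):
        x = torch.cat([spec.view(spec.size(0), -1), img_emb], dim=1)
        return self.net(x)

# One adversarial training step on a hypothetical paired batch.
G, D = SpectrogramGenerator(), SpectrogramDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

img_emb = torch.randn(16, 128)            # visual embeddings (placeholder data)
real_spec = torch.randn(16, 1, 128, 128)  # paired real spectrograms (placeholder data)
noise = torch.randn(16, 100)

# Discriminator: push real pairs toward 1, generated pairs toward 0.
fake_spec = G(img_emb, noise).detach()
d_loss = bce(D(real_spec, img_emb), torch.ones(16, 1)) + \
         bce(D(fake_spec, img_emb), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator: try to fool the discriminator on generated pairs.
g_loss = bce(D(G(img_emb, noise), img_emb), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The symmetric direction (audio-to-visual) would condition an image generator on an audio embedding in the same fashion; only the input and output shapes change.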