The problem of generating natural language descriptions of images and videos has been steadily gaining prominence in the computer vision community and beyond. It is important for at least three reasons: i) transducing visual data into textual data would permit well-understood text-based indexing and retrieval mechanisms essentially for free; ii) fine-grained object models and region labeling would add significant richness to multimedia retrieval techniques; and iii) grounding representations of visual data in natural language has great potential to overcome the inherent semantic ambiguity of closed-world recognition tasks. Compared to images, however, videos contain rich temporal structure and causal relationships, and hence introduce a new level of difficulty. The focus of our research is on designing efficient, scalable methods that harness this temporal richness by automatically discovering event sequences, reasoning about their ordering, and summarizing their content in natural language. We developed early prototypes that combine bottom-up and top-down information to translate short clips into text. More recently, we have developed deep neural network methods that automatically discover and summarize procedures in long, untrimmed instructional videos. The YouCook dataset we collected has been widely used in the research community.
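
To make the general clip-to-text setting concrete, the sketch below shows a generic sequence-to-sequence video captioner: pre-extracted clip features are encoded by a recurrent network and a second recurrent network decodes a word sequence. This is an illustrative assumption for exposition only, not the specific models described above; the feature dimension, vocabulary size, and hidden size are placeholders.

```python
# Minimal sketch of a generic encoder-decoder video captioner (illustrative
# only; not the authors' model). Dimensions are placeholder assumptions.
import torch
import torch.nn as nn


class ClipCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, vocab_size=10000, hidden=512):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, clip_feats, captions):
        # clip_feats: (batch, n_frames, feat_dim) pre-extracted visual features
        # captions:   (batch, seq_len) token ids, teacher-forced during training
        _, state = self.encoder(clip_feats)       # summarize the clip
        dec_in = self.embed(captions)             # embed caption tokens
        dec_out, _ = self.decoder(dec_in, state)  # condition on the clip summary
        return self.out(dec_out)                  # per-step vocabulary logits


# Toy usage with random tensors standing in for real features and tokens.
model = ClipCaptioner()
feats = torch.randn(2, 16, 2048)            # 2 clips, 16 frames of features each
tokens = torch.randint(0, 10000, (2, 12))   # 2 captions, 12 tokens each
logits = model(feats, tokens)               # shape: (2, 12, 10000)
```

Procedure discovery in long untrimmed videos additionally requires segmenting the video into steps before (or jointly with) captioning each segment, which is where the temporal reasoning discussed above comes into play.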