Xiaoqian Shen 沈晓倩


Bio: I am currently a Master's student in Computer Science at King Abdullah University of Science and Technology (KAUST), supervised by Mohamed Elhoseiny. Before that, I received my BSc in Computer Science from Jilin University, China.

Research: I am excited about complex problems that can be tackled with learning-based systems. My research currently focuses on generative models, spatiotemporal representations, and low-resource learning.

Feel free to reach out to me: xiaoqian.shen@kaust.edu.sa

Resume Twitter Scholar Github LinkedIn


Publications


MoStGAN: Video Generation with Temporal Motion Styles
Xiaoqian Shen, Xiang Li, Mohamed Elhoseiny
CVPR 2023
Project Page Paper Supplementary Code

ChatGPT Asks, BLIP-2 Answers: Automatic Questioning Towards Enriched Visual Descriptions
Deyao Zhu, Jun Chen, Kilichbek Haydarov, Xiaoqian Shen, Wenxuan Zhang, Mohamed Elhoseiny
arXiv 2023
Paper Code

Exploring Hierarchical Graph Representation for Large-Scale Zero-Shot Image Classification
Kai Yi, Xiaoqian Shen, Yunhao Gou, Mohamed Elhoseiny
ECCV 2022
Project Page Paper Supplementary Code

Adversarial Text to Continuous Image Generation
Kilichbek Haydarov, Aashiq Muhamed, Jovana Lazarevic, Ivan Skorokhodov, Xiaoqian Shen, Chamuditha Galappaththige, Mohamed Elhoseiny
OpenReview 2022
Project Page Paper Code

KeMRE: knowledge-enhanced medical relation extraction for Chinese medicine instructions
Tao Qi, Shan Qiu, Xiaoqian Shen, Haopu Chen, Shuai Yang, Hao Wen, Ya Zhang, Yuanqing Wu, Yongfeng Huang
Journal of Biomedical Informatics 2021
Paper

Image Splicing Location Based on Illumination Maps and Cluster Region Proposal Network
Ye Zhu, Xiaoqian Shen, Shikun Liu, Xiaoli Zhang, Gang Yan
Applied Sciences 2021
Paper

Projects


MoStGAN: Video Generation with Temporal Motion Styles
Mohamed Elhoseiny, Vision-CAIR, KAUST, Jeddah, Saudi Arabia, 2023
  We argue that a single time-agnostic latent vector in a style-based generator is insufficient to model various motions, and hence introduce additional time-dependent motion styles to model diverse motion patterns. In addition, we propose a motion style attention modulation mechanism to augment frames with vivid dynamics.

Affective Visual Dialog
Mohamed Elhoseiny, Vision-CAIR, KAUST, Jeddah, Saudi Arabia, 2022
  We study how affective language, in the form of conversations grounded on visual stimuli, informs human emotion, and collect a new dataset for (1) dialog-based question answering and (2) dialog-based emotion classification and affective explanation.

Adversarial Text to Continuous Image Generation
Mohamed Elhoseiny, Vision-CAIR, KAUST, Jeddah, Saudi Arabia, 2022
  We introduce HyperCGAN, a conceptually simple approach to adversarial text-to-continuous-image generation that uses HyperNetworks to condition an Implicit Neural Representation (INR)-based GAN on text.

Knowledge-enhanced Medical Relation Extraction
Yongfeng Huang, Tsinghua University, Beijing, China, 2021
  We propose a knowledge-enhanced framework for medical relation extraction (RE) that exploits knowledge of medicines to better perform medical RE on Chinese medicine instructions.

Self-Supervised Medical Image Segmentation
Zhenghua Xu, Hebei University of Technology and University of Oxford, Tianjin, China, 2021
  We propose a multimodal contrastive self-supervised medical image segmentation method that uses a novel domain-sharing generative adversarial network to achieve contrastive domain translation.