A novel method, P-Tuning, is proposed that concatenates trainable continuous prompt embeddings with discrete prompts. It stabilizes training by narrowing the performance gap between different discrete prompts, and improves performance by a sizeable margin on a wide range of NLU tasks, including LAMA and SuperGLUE.
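The core idea can be sketched minimally as follows: trainable continuous prompt vectors are concatenated with the (frozen) embeddings of discrete prompt tokens before being fed to the language model. This is an illustrative sketch only; all names, shapes, and values are assumptions, not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 100, 16
num_continuous = 4  # number of trainable continuous prompt embeddings

# Stand-in for the frozen word-embedding table of a pretrained LM.
word_embeddings = rng.normal(size=(vocab_size, embed_dim))

# Continuous prompt embeddings: the only parameters updated during
# tuning, while the LM itself stays frozen.
continuous_prompts = rng.normal(size=(num_continuous, embed_dim))

def build_input(discrete_token_ids):
    """Concatenate continuous prompt embeddings with the embeddings
    of the discrete prompt tokens, yielding the model's input."""
    discrete = word_embeddings[discrete_token_ids]  # shape (T, D)
    return np.concatenate([continuous_prompts, discrete], axis=0)  # (P+T, D)

inputs = build_input([5, 17, 42])
print(inputs.shape)  # (7, 16): 4 continuous slots + 3 discrete tokens
```

In training, gradients flow only into `continuous_prompts` (in the paper these are additionally parameterized by a prompt encoder), which is what makes the method robust to the particular wording of the discrete prompt.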