In robotic manipulator operations, precise trajectory planning of the end-effector's position and orientation is crucial, especially in tasks such as grasping a bottle by its neck. This paper presents a novel reinforcement learning approach to this problem. Specifically, we employed the Soft Actor-Critic algorithm with Hindsight Experience Replay to train a UR5e manipulator in a simulated environment, using a task-specific design of the state, action, and reward. In a comparative analysis against other reward function designs, the trained model generated more efficient trajectories and achieved a significantly higher success rate. These results underscore the potential of our approach for improving trajectory planning in robotic manipulator operations. © 2023 IEEE.
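
As a rough illustration of the kind of training setup the abstract describes, the sketch below wires Soft Actor-Critic to a Hindsight Experience Replay buffer using Stable-Baselines3. The `ReachGoalEnv` class, its workspace bounds, step size, sparse reward, and all hyperparameters are hypothetical stand-ins for illustration only; they are not the paper's UR5e simulation or its state, action, and reward design.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC, HerReplayBuffer


class ReachGoalEnv(gym.Env):
    """Toy goal-conditioned reaching task (stand-in for an end-effector positioning task)."""

    def __init__(self, threshold=0.05):
        super().__init__()
        self.threshold = threshold
        # Action: small Cartesian step; observation: dict format required by HER.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        box = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Dict(
            {"observation": box, "achieved_goal": box, "desired_goal": box}
        )

    def _get_obs(self):
        return {
            "observation": self.pos.copy(),
            "achieved_goal": self.pos.copy(),
            "desired_goal": self.goal.copy(),
        }

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = self.np_random.uniform(-1.0, 1.0, size=3).astype(np.float32)
        self.goal = self.np_random.uniform(-1.0, 1.0, size=3).astype(np.float32)
        self.steps = 0
        return self._get_obs(), {}

    def step(self, action):
        self.pos = np.clip(self.pos + 0.05 * action, -1.0, 1.0).astype(np.float32)
        self.steps += 1
        reward = self.compute_reward(self.pos, self.goal, {})
        terminated = bool(reward == 0.0)  # sparse reward: 0 on success, -1 otherwise
        truncated = self.steps >= 50
        return self._get_obs(), float(reward), terminated, truncated, {}

    def compute_reward(self, achieved_goal, desired_goal, info):
        # Sparse reward reused by HER when relabelling goals (handles batched inputs).
        dist = np.linalg.norm(achieved_goal - desired_goal, axis=-1)
        return -(dist > self.threshold).astype(np.float32)


env = ReachGoalEnv()
model = SAC(
    "MultiInputPolicy",
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(n_sampled_goal=4, goal_selection_strategy="future"),
    verbose=1,
)
model.learn(total_timesteps=50_000)
```

The "future" goal-selection strategy relabels failed transitions with goals actually reached later in the same episode, which is what lets SAC learn from a sparse success/failure reward of the sort compared in the paper.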