Abstract: Programming skills can be trained through extensive computational programming exercises. Such practice not only strengthens understanding of a programming language's syntax, but also develops problem-solving skills using computational logic. Unfortunately, automated feedback for rectifying problem-solving difficulties is costly to build due to the diversity of question requirements. Popular methods for supporting such feedback include dynamic testing, solution templates, and intelligent agents. However, these approaches require additional resources to be prepared in advance of a training session. This research therefore addresses a simple and immediate method for associating feedback with a student's current programming difficulty. It enables each program statement containing a semantic mistake to be instantly associated with live expert feedback using a program-statement parser. An experiment was conducted using 793 solution attempts at answering a computational programming question. The results show that feedback can be provided quickly on specific program statements. Although expert feedback was provided on only 1% of the overall program attempts, the same feedback was successfully replicated to up to 33% of the total program attempts containing similar mistakes. This approach can be considered efficient, as it does not require feedback resources to be prepared in advance for each computational programming question. Furthermore, it provides a mechanism for turning live expert feedback given during a lab session into reusable automated feedback.