Dear All,
I would like to bring to your attention a situation that is, quite frankly, keeping me up at night.
The session in question was already a retake, granted after a previous attempt was invalidated due to a technical bug in the exam lab environment.
Unfortunately, this second session was again affected by technical problems: some keys on the keyboard were unresponsive. Even after replacing the device, the issue persisted, severely affecting my concentration and leading to some careless mistakes.
Despite that, I was confident I had reached at least the minimum score: I attempted all the tasks, some completed perfectly, others partially, but none left untouched.
Instead, I received a final score of 195 points.
Reviewing the breakdown of scores, I found:
Manage basic networking: 0%
Understand and use essential tools: 44%
Operate running systems: 67%
Configure local storage: 100%
Create and configure file systems: 100%
Deploy, configure and maintain systems: 88%
Manage users and groups: 100%
Manage security: 100%
Manage containers: 0%
Regarding containers, I wasn’t able to finish the entire task, but I did log in and pull an image — that’s at least 20% of the objective, and once again, it depended on a functioning network. Still: 0%.
What troubles me most, though, is the 0% on basic networking.
The network was fully functional.
All these tasks — evaluated successfully — clearly demonstrate the network was up and running:
For Task 1 (network):
- Successful ping between node1 and node2
- Creation of the repository file via the dnf command, with the ".repo" config file correctly retrieved from the given server
- Autofs mounting of a user home directory via NFS from a remote server
- Correct NTP service configuration, with the server highlighted and preceded on my client by ^* (fully synchronized)
For Task 2 (containers):
- Access to an external registry server using the provided credentials
- Successful container image pull
(Both of these again clearly depend on a properly configured and functioning network)
I was told one of the five parameters was not correctly set (possibly the netmask, though I was not given a chance to verify it).
Even if that’s accurate: 1 out of 5 parameters does not justify a 0% score. That would imply 80% of the task was done correctly.
A 0% score suggests either the task was not started or was completely incorrect, yet the functional outcome directly contradicts that.
This decision seems completely inconsistent and illogical. I’m not asking for anything I didn’t earn — I’m only asking that the facts and results be acknowledged and reflected in the score.
I’m currently requesting an official review. If the points for networking and containers had been awarded even minimally, based on the successful completion of relevant parts, my exam result would have been more than valid.
I was quickly offered another retake, but to me, this feels like a way to avoid addressing the underlying problem, a flawed assessment inconsistent with the evaluation logic applied to the rest of the exam.
This compromises the credibility of the entire scoring system because, in order to avoid being questioned, it sacrifices fairness, transparency, and accountability towards each individual candidate.
So I ask:
Can a fully functional network be scored 0% because of a single missing parameter?
And what about all the related work that depended on it? Where did that go?
Thank you for your attention.
So one thing to keep in mind: some pieces of an objective may not truly have a score or "points" assigned. You are assuming things are weighted equally, like a multiple-choice test in school, where with 20 questions each question is worth 5 points toward 100%. That isn't really how these exams work.
Another point: while a network "can" work in getting traffic from point A to point B, routing and the netmask do matter, because a /24 (255.255.255.0) network is very different from a /22 network in terms of which addresses are in range. So while this doesn't seem like a big deal to you and you think you should get partial credit, in a real-world scenario this is wrong and could cause multiple different failures in a real networking environment, because gateway addresses, subnets, and routing could all potentially be messed up.
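To put rough numbers on that difference (the addresses below are made up purely for illustration and have nothing to do with the exam), Python's ipaddress module shows how much the prefix length changes which hosts are considered in range:

```python
import ipaddress

# Hypothetical addressing, used only to illustrate the /24 vs /22 difference.
net24 = ipaddress.ip_network("192.168.4.0/24")   # 255.255.255.0
net22 = ipaddress.ip_network("192.168.4.0/22")   # 255.255.252.0

print(net24.num_addresses)   # 256  addresses (192.168.4.0 - 192.168.4.255)
print(net22.num_addresses)   # 1024 addresses (192.168.4.0 - 192.168.7.255)

host = ipaddress.ip_address("192.168.5.10")
print(host in net24)   # False -> would be reached via the gateway
print(host in net22)   # True  -> would be reached directly
```

Same starting address, one different parameter, and a given host either is or is not part of the local network, which is exactly the kind of thing that changes routing behaviour.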
As @Chetan_Tiwary_ mentioned, you can always contact the exam team for a re-review of the results. Everything you do is captured and can be analyzed. I will tell you that we intentionally do not give detailed results, and we also don't discuss exam questions and answers, as that would be a violation of the NDA.
Another example I've used with people who have complained about results and partial credit is a pretend scenario I make up ...
Pretend you are a new employee getting paid by a brand new tech company. You completed the hiring form, and HR has sent everything to a system administrator to set up your account in the system. They create your account with a username and password, put in your banking information, and everything else. However, there is an issue with the routing or account number: the admin knew how to add you to the system, but the information was slightly incorrect. This is an all-or-nothing situation; because the information was wrong, you aren't getting paid.
So while I can't explain anything about weighting or how/why you received the percentages you did, think of the above scenario: the admin got your name right, they got the username and password right, they got your address right, but the bank routing number was a digit off, and the account number was a digit off. You didn't get paid, but hey, that is 6 pieces, and the last 2 parts had multiple digits: the routing number is 8 digits and they missed one, and the account number is 10 digits and they missed one.
So they were 87.5% correct on the routing number, 100% correct on the name, username/password, and address, and 90% correct on the account number. But the true objective was to get you paid, and because of what was in error, you didn't get paid. The objective was not met, so you would get 0%.
As @Chetan_Tiwary_ mentioned, the official review must be requested at https://rhtapps.redhat.com/comments. But again, the results will be reviewed and corrected if and only if something was incorrect; otherwise the scores will stand, and you likely will not receive any additional feedback as to why or how you received the score that you did.
@StefanoM for specific feedback on exam questions, grading, or your exam experience, the only way to provide feedback or get questions answered is through the official certification comment form,
https://rhtapps.redhat.com/comments
Since you have already raised it, please wait for their official response. You can reply to the mail with your follow-up and you will get a response soon.
Thanks for your understanding!
Thanks for your reply: in just a few lines you've given me all the information that the various support teams haven't dared to provide in several weeks...
Unfortunately, for me, this exam is becoming a real undertaking. It began in April with two systems affected by a bug (known and left in place), continued with a retake also marred by technical malfunctions (which caused me constant interruptions and poor concentration), and now this misunderstanding. Allow me this brief digression, but all of this, in addition to the various losses of time and work that no one will ever repay, conveys to me negligence and a lack of attention to the tools, and a real lack of clarity and transparency in everything else. Why not provide this information clearly on the portals? It would avoid countless discussions and a lot of unnecessary work for everyone. In the end, however, there will be no understanding when faced with 0 or 1, black or white, while the real world we're referring to is necessarily made up of shades of gray, for a thousand reasons.
That said, since I don't know the grading system, which is probably much more complex than I imagine, I simplified it, because that's the only thing I can do. But if I see percentages assigned consistently, even on tasks where I know exactly what I did wrong, forgot, or left incomplete, the message is that the good work done is still being evaluated. After all, the exam is passed with 210/300, which means completing 70% of the activities, some of them at 100% and others with partial percentages assigned. In my direct experience across three sessions, 0% has meant a task I never even started, and that is completely inconsistent with the objective reality here.
Your example is instructive and telling, but in my opinion my situation isn't quite the same: applied to my case, the user was in fact "paid." Perhaps he was lucky, because even by taking the wrong route (the netmask), the payment reached the intended recipient. Maybe the "transfer" notification never arrived, but the payment was made, the data traveled, even by multiple routes (the various activities performed), and the delivery was made.
Therefore, I believe the 0% grading is unjustified, pretextual, and biased. An assessment tool must always be consistent, for better or worse, across every topic on the exam.
I believe this is the foundation of the credibility and accuracy of any assessment tool. And in my case, that wasn't the case at all.
If the approach and the evaluation criteria were truly consistent and coherent, then at least part of the work should be recognized: the part that allowed me to complete various other tasks, just as it was for the rest of the score. In my case, withholding even a small percentage nullifies all the work that demonstrates I learned and applied the different topics on the exam.
But I know they'll never change their decision, and it doesn't take much to understand their reasons... Unfortunately, this means that every time (if I ever want to continue this journey) I will be unable to take exams with the necessary peace of mind, because even with commitment and good preparation, everything hinges on a final verdict that applies double standards and never fully clarifies the reasons behind certain choices.
Thanks again for your time.
Stefano
PS: "Only dead people and fools never change their minds," said James Russell Lowell.
I see your frustration and totally understand. I will add just a few more pieces to help set your mind at ease. The grading and evaluation are consistent across the exam, but again, the weight of questions may not be, and doesn't need to be, because some things are worth more points or considered higher value than other items. As for what the values and weights are, that isn't known to anyone outside the certification team.
In terms of your networking example, though, I wanted to offer one more piece of information on why/how it might not have worked. Your tests succeeded because you tested from a machine on the same network, so traffic could technically get through (maybe you used ping). However, the netmask tells devices whether they are on the same network so they can communicate directly with each other; if they need to communicate with a system outside their network, the communication must be routed through a gateway. Subnetting allows dividing a large network into smaller, manageable networks.
At home I was using a 192.168.15.0/24 (255.255.255.0) network. This was fine for the longest time, until more devices and IoT things arrived and I began running out of IP addresses. I've now switched to a 192.168.12.0/22 network, which still contains the old 192.168.15.x range and gives me a much larger pool. The netmask determines which network a system believes it is on, even when its IP address stays the same. I had to make some adjustments on older systems to ensure they got the correct netmask to communicate properly. So while it appeared the older systems were fine talking to other older systems, they didn't work properly (because of the netmask) in the larger environment.
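A small sketch of what that looks like (with made-up addresses, not my real ones): a device that kept its old /24 mask and a device with the new /22 mask compute different networks, so one of them thinks the other is reachable directly while the other tries to go through a gateway.

```python
import ipaddress

# Hypothetical home-network example: same physical LAN, but the older device
# kept a stale /24 netmask after the network was widened to a /22.
old_device = ipaddress.ip_interface("192.168.15.20/24")   # stale netmask
new_device = ipaddress.ip_interface("192.168.13.40/22")   # updated netmask

print(old_device.network)                     # 192.168.15.0/24
print(new_device.network)                     # 192.168.12.0/22

print(old_device.ip in new_device.network)    # True  -> new device sends directly
print(new_device.ip in old_device.network)    # False -> old device tries the gateway
```

That asymmetry is why things could look fine in one direction and still be broken in the environment as a whole.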
Lastly, I would like to point out that you should take care to really read the questions for what is being asked, and also note where you are performing the work. I once failed an exam because I did work on the wrong systems (by accident). I had checked all my work and thought I had completed things 100% correctly in the instances where I was sure I was right, but unfortunately, even though I did the things correctly, the work was performed on the wrong system, located in the wrong directory, or done for/as the wrong user.
As an exam taker myself, I definitely understand the frustration with the experience, and the OCD in me is also upset when/if I fail an exam attempt, but it is always a learning experience. When I've taken an exam again with a fresh set of eyes, sometimes I see the gotchas; sometimes I don't feel I've done anything different but I end up passing, and then I try to reflect on what the questions said and what might have happened.
As an examiner, I've delivered and graded several exams (in person, in the room with the person taking the exam) where an exam taker will ask questions for clarification and we can provide almost no guidance (since we can't help). But in watching what they are doing, you can see the mistakes happening in real time: something gets interpreted wrong, or the user is accidentally on the wrong terminal or SSH session (so yes, work is being done correctly, but it can't be graded). So again, there are very fine lines in what can and can't be communicated. One of the hallmarks of all the Red Hat certifications is the integrity of each exam and a consistent grading experience for all learners (consistent even when, as in your case, it feels like a bad experience because points and credit aren't given as expected); the evaluation is a yes/no: you either get the points or credit for a portion of an exam objective or you don't.
As for the suggestion on explanations and being more transparent, this is something that has been discussed before and I do think a nice blog article or write-up would be good explaining our exams and how grading, scoring, and exam integrity work. I've copied in @Lene on here as I think she would be the best person from the certification team to see that effort through.
If you weren't aware of them already, I would also highly encourage you to watch some of the YouTube videos that Ben created on the Red Hat exam experience. When I was an instructor, I would often show students those videos so they knew the look and feel of the exam, helping them understand what the experience would be like before sitting for their first exam. As you've already taken an exam, those might not be as valuable to you now, but you never know.
Thank you again for your comments.
It's true! The weight of exam questions is not—and should not be—the same across the board, because some tasks are objectively worth more points or are considered more valuable than others. You absolutely nailed it, and I appreciate it. There's always a weighting factor. It's quite obvious that creating a user is different from configuring a network or launching a container with persistent storage, and those tasks will naturally carry more points even at the same percentage level.
But if you look at the results, only tasks that were completely skipped or entirely incorrect get a score of 0%. My networking task fits neither of those cases. So why 0%? What kind of proportionality and fairness is that?
Yes, I’m very familiar with what you’re describing: you’re referring to a newer subnetting system called CIDR (Classless Inter-Domain Routing), which replaces the old class-based system (A, B, C, etc.) and allows for more flexible host management.
If I forgot to include the netmask, it was a slip on my part. Not to make excuses, but these are relatively simple and almost automatic tasks, and the added stress, interruptions, and distractions I had to manage during the exam likely affected my overall performance.
That said, for the reasons we’ve already discussed regarding task weight, this is clearly not an absolute failure—especially when compared to how partial credit is granted in other tasks. That’s the core of the issue: the lack of consistency and fairness in judgment.
Without uniform criteria, the credibility and validity of the entire evaluation system are compromised.
Your comment about writing an article or blog post makes a lot of sense, especially in addition to what was said earlier. If candidates are clearly informed that some tasks might be scored as 0% or 100%, regardless of intermediate steps, it would prevent false expectations.
That would be a truly clear and transparent way to inform candidates, and would likely avoid thousands of tickets on the issue. The fact that this remains one of the most common reasons for disputes clearly demonstrates the lack of transparency and the superficiality with which, from an entirely different internal perspective, it's assumed that candidates are fully aware of what they're facing—not just technically, but also procedurally.
Why isn’t it already like this? These are essential pieces of information, after all.
Put yourself in the shoes of someone taking the exam for the first time: they study, prepare, and practice by following Red Hat's official recommendations, taking the RH124 and RH134 courses and using official Red Hat materials. They certainly can't spend months digging through YouTube, watching hundreds of videos, especially since so much of what is out there is low quality and leads only to confusion and disorientation. Then, when things go wrong, the blame is often placed on the use of unofficial materials, saying: see what happens when you don't stick to the official content if you want proper preparation?
This leads people to believe that if they strictly follow the official path, they will be ready for the exam. Instead, they find themselves facing something more like a game show, where form matters more than substance, despite the supposed focus being on education and learning. It feels like sitting an exam on the exam itself, but without the tools or experience to do so. Do I really have to take the exam three times just to understand the trick? Because that's exactly what it feels like, a trick, if you deliberately keep it hidden. I don't know whether this is intentional, but the result is the same: a massive waste of time, energy, and money for the candidate and their company.
Let me make a brief aside. For Red Hat, it's easy to wipe everything clean in case of errors (and I have personally experienced too many, in two out of three attempts) and offer a second chance. For us, it's a serious problem of time, effort, and above all, missed professional opportunities. It's a damage that costs Red Hat nothing, but for the unfortunate candidate, no one will ever repay that loss. Aside over.
Back to the exam. You yourself have had these negative experiences. But I'd also add this: the exam questions are often unclear, and believe me, it becomes more frightening not to understand the question than to fail at the task itself. Or worse: to take what seems like the correct technical approach, only to find out from the final score that it was considered wrong. That's precisely what the proctor should be there for. Even in university exams, you're allowed to ask, "Excuse me, I didn't understand the question," just to be sure before beginning. That's where exam integrity starts: with the possibility of clarification, so you don't end up taking a technically correct path that's marked wrong due to a formal misunderstanding.
To conclude, at the very least, this situation should be considered a shared responsibility. It's not fair to assume that everything imposed from one side is automatically right, especially when it's done with poor transparency and a clear lack of fairness in judgment.
I continue to believe I've been seriously harmed, and I'm the only one who bears the consequences directly. This is not just frustration; it truly feels like being the victim of an unfair imposition. I do not believe I can accept the easy fallback of a retake (as I've already said, it's far too convenient and costless for Red Hat), because doing so would mean accepting that the fault lies entirely with the candidate, when in reality there's a significant lack of transparency and fairness from those who hold the power and seem to take advantage of it.
I remain convinced that I've suffered a serious loss due to the unfair deduction of at least part of my score, which, by a very small margin, has nullified an enormous amount of work and dedication. I will do everything in my power, including requesting an external audit, to obtain a fair and accurate assessment of what really happened.
This experience with Red Hat certifications has really been a disappointing and unexpected one.
Please forgive my frankness, and once again, thank you for your kind and helpful efforts.
--Stefano
I'd like to add something I hadn't yet highlighted, and which, in my opinion, is far from insignificant.
In a previous exam session, despite not having completed the container assignment, I was given a partial score of 33%. In the most recent session, exactly the same thing happened: same container assignment, same incomplete result, but a markedly different evaluation, with a score of 0%.
The justification for this second evaluation? Failure to complete the assignment, regardless of the steps actually performed, which, I repeat, were accessing the registry and downloading the image.
Leaving aside the issue of evaluation and scoring methods, we are talking about the exact same situation in two different exams, but with a difference in results that is clearly neither fair nor consistent. This cannot help but raise serious and legitimate doubts about the correctness and consistency of this choice, which demonstrates a clearly inconsistent application of the same rules in the same situations.
Given these precedents, the same concerns about the evaluation of the network task are legitimate: here, the approach was to completely discard all the operations performed (4 out of 5) and assign a score of 0%, despite a perfectly functioning setup that was a prerequisite for several other tasks performed and evaluated positively.
I remind you that the parameter in question is almost certainly the netmask (no objective feedback is permitted), and that even in a real network, if the hosts are all on the same subnet, the netmask has no practical impact. In a certain sense, it wouldn't even be wrong to adapt to the environment made available for the exam and avoid setting parameters that have no effect in that specific situation.
Let me clarify. If the evaluation rationale is that in a real environment, with different networks, the netmask is crucial (as it obviously is), then the task itself should simulate that situation, for example by placing the servers external to the two nodes on different networks. Otherwise, an explanation is being offered based on assumptions that, while correct, don't match the lab environment provided, which has all objects connected to the same network. And, ironically, the only parameter that isn't crucial in that situation is being treated as the crucial one, ignoring everything else and completely nullifying the score. No, something isn't right with this approach...
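To illustrate what I mean with purely invented addresses (I am deliberately not reproducing anything from the exam): on a single shared subnet, even a wrong prefix length still lets the nodes reach each other directly, which is exactly why every network-dependent task worked; the mask only changes behaviour for destinations outside that subnet, which the lab did not include.

```python
import ipaddress

# Invented values for illustration only: suppose a /16 was configured where a /24
# was expected. A peer on the same subnet is still reached directly, so ping,
# NFS, NTP, and registry access all keep working.
node = ipaddress.ip_interface("10.0.50.10/16")      # wrong prefix length
peer = ipaddress.ip_address("10.0.50.11")           # same lab subnet
outside = ipaddress.ip_address("10.0.7.5")          # a host on another subnet

print(peer in node.network)       # True  -> local traffic is unaffected
print(outside in node.network)    # True  -> would wrongly bypass the gateway

expected = ipaddress.ip_interface("10.0.50.10/24")
print(outside in expected.network)  # False -> a correct mask would use the gateway
```

So the error would only ever surface for off-subnet destinations, and none of the graded activities depended on one.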
Therefore, when faced with similar or even overlapping cases, different criteria were adopted, without a clear explanation. As I've said several times, I certainly don't want to delve into the merits of the evaluation algorithms, out of respect for confidentiality; but when the final results give the impression of following an arbitrary or variable logic, doubts inevitably arise. And these doubts become even more frustrating when every request for clarification runs into a wall of confidentiality that ends up preventing a real and constructive discussion.
And this is precisely why, in all honesty and without any pretext, I have been requesting a score review for weeks now. It's a fact that on at least two exam points, the grading appears rather ambiguous when compared to other similar situations; this significantly impacted the final result. Without this arbitrary and unjustified omission, considering the examples provided and the work done, I would have already passed the test without any difficulty.
Why should I repeat everything from the beginning, at my own risk and assuming full responsibility? I certainly have my share of responsibility (otherwise I would have received full scores), due in all honesty also to the negative circumstances in which I had to take this exam; but all I ask is that the work done be given due and proportionate weight, using the same standards applied to all the other assignments and exams. Consistency, fairness, and transparency, to give things a name. In the meantime, I have the impression that every possible excuse is being sought to justify this behavior, while the barrier of confidentiality can still be counted on to ultimately reach a decision that is unquestionable yet tainted by legitimate doubts that will never be clarified.
--Stefano