
Welcome to the oDesk Community! Connect here with fellow clients, contractors, and oDesk staff. Please review our Usage Policy.

feedback on test

We need to be able to see which questions were missed on qualification tests, both so test-takers can learn from their mistakes and so ambiguous or incorrect questions can be pointed out.

Vote Result

Score: 8.5, Votes: 11
Quite true. Also, I just

Quite true. Also, I just took the HTML 4.01 test, and found that the grammar in the questions is embarrassingly bad. Perhaps a native English speaker should check over the questions (I'd be happy to volunteer).

And it gets worse...

I have taken several other tests since I posted my last comment. The grammar and proofreading on all of them are dreadful. There are even obvious typos in the sample code!

The worst, though, was on the Windows NT test, where a question asked how long a "resourse" [sic!] would exist. There were several time values given as answers, but no units -- the question could have been referring to milliseconds or hours, or anything in between.

Having tests is a great idea, but someone really needs to edit them to a higher standard if they're going to be taken seriously. 

Not only that

From a buyer's perspective (which is mine), I'd like to have more information available so that I can evaluate a provider based on their test score.  Without that extra level of detail, it's difficult to use the test as a basis at all; I would have to administer my own test during the interview process.

At a minimum, I would like to see the questions asked on the test so that I can get a better idea of how difficult they are.  I might even like to see the answers the provider gave, but that might be too much information.

For example, I'm evaluating a candidate now who has taken the Java test, and I would like to know why they scored only 33% on classes and 0% on inheritance, but 100% on threading.  Without more detailed information, this candidate would not get the job, because of the low scores on what are the basic skills for the technology.

Was only one question asked on threading, and the candidate happened to guess the right answer?  Or were enough questions asked that it's obvious the candidate really understands the topic?  I don't know, because I don't have access to the test itself.

It's possible that the test is very hard, or that the provider did not understand the questions correctly.  With more detailed information, I could evaluate the candidate better and make sounder assumptions about their level of understanding of a particular skill.


Hello HEI-USER, Considering


Considering that every time someone takes a test, a new list of questions is put together (so that a test-taker re-taking the test doesn't get to answer the same questions), what would be the most convenient way for you to assess the test and the provider's performance?

Would you like to be able to take a (sample) test yourself?

Would you like to be able to see the test history (list of Qs and As the provider gave) for each provider/each test?

Would seeing 5 sample questions and possibly 5 answers the provider gave to these questions do the trick?

It is in our best interest to allow you to use oDesk test results to find a provider most suitable for your projects, but we do need more of your input to implement this successfully.

Olga Kudamanova

Provider Operations Manager


... what would be

... what would be the most convenient way for you to assess the test and the provider's performance?

I'm not entirely sure as I haven't thought it through completely.  All I know is that having the raw score isn't enough to do anything for me.  I see some people with high nineties on a certain test, and others with 40-50%.  But, according to the other information, it looks like their skills would be comparable. 

That means that either a) one provider is lying about their skill levels, or b) the tests aren't very fair or accurate.  From what I can tell, I suspect b).

Would you like to be able to take a (sample) test yourself?

That would probably be useful, in that it would give me an idea about the complexity and/or level of the test.  However, obviously since you randomize the questions, I can't be sure that all the questions on the test are fair.

By randomizing the test questions from a large question pool, you lose the ability to compare provider to provider.  If I knew that every provider who takes the Java test gets asked the same questions, then I'd know I could compare two candidates by their test scores.  However, if they each received different questions, then I can't compare apples to apples; there's no way to know whether one candidate simply ended up with the harder questions while another got the easier ones.

Would you like to be able to see the test history (list of Qs and As the provider gave) for each provider/each test?

Yes, that would be ideal from a buyer's perspective.  However, it might cause some concern among your providers (it potentially reveals too much information); I don't know, you'd have to ask the providers whether they're comfortable with this.  Or, even better, provide an option on the test so that the answers can be viewed by buyers, perhaps only after the provider has been selected for an interview.  That way providers can decide whether they want their answers revealed.

Would seeing 5 sample questions and possibly 5 answers the provider gave to these questions do the trick?

Possibly, though I don't know how this would be different from seeing all of the answers.

Even better, it would be nice to see the questions the provider got wrong.  That way, I can see whether the question was tricky and whether the test-taker nevertheless gave a reasonable answer.

Again, maybe this information is available only after the candidate has been selected for an interview.  Or maybe it's only available if the provider allows it to be.

It is in our best interest to allow you to use oDesk test results to find a provider most suitable for your projects, but we do need more of your input to implement this successfully.

I know this is completely a different idea, but in my mind it would be really nice to be able to create a test for the provider to take.  I could look through the pool of all questions and create a specific test per job opening.  That way, I can test across all the technologies that I require for the job.

For instance, for the job I'm currently fielding, it would be ideal to have a test which measures: Java basics, Java Servlets, Hibernate, SQL, J2EE, HTTP, and JFC/Swing.  If I could put together a test that asks questions across all of these categories, that would be pie-in-the-sky ideal for me.  But I know that's a different idea entirely.

Also, while I've got your attention: just from reading these forums, it's evident to me that the test-taking procedure is flawed.  Specifically, the inability to go back and correct answers before the time has expired is a huge testing-methodology mistake.  A good test will always allow the taker to answer the questions in whatever order they want; the test will yield more accurate results that way.  This is likely one of the biggest reasons why test scores are relatively low across all test-takers.

One final thought.  I'm not sure how it works, but I know that if I were a provider and took a test and got a low score on it, I would rather not have that score revealed, as it would likely be detrimental.  I've seen low scores on a lot of tests taken by providers, and as a buyer it's hard not to be influenced by them.  Again, because I can't get any feedback on why a test score is so low, I can't tell whether the candidate really performed badly, whether the test was written poorly, or what.  If I scored low, I would want either a chance to retake the test or the option of not having the score revealed to buyers.  If providers don't get this chance, it might very well ruin their ability to get any work at all.  Eventually they'll log off and never come back.

I think one of the best ideas discussed above is to give more options to the provider to allow them to choose whether or not their scores are revealed, and whether or not their individual questions/answers are revealed.  That would give the most protection to the providers and the most transparency to the buyers.

Thanks for reading.

Thank you very much for this

Thank you very much for this detailed answer!

First of all, let me confirm that currently, providers can both make their test score private (that is, choose not to show it in their profile) and re-take a test after a period of time (normally 30 days). 

The test score is visible in the profile by default, so a provider needs to take an action to hide it, but the possibility is right there for them.

The 30-day waiting period was introduced to make sure the provider doesn't abuse the system, and studies some before re-taking the test.

As far as your suggestions go, let us digest them for a while and get back to you with what can be implemented and the best way of doing so. Again, thank you so much for taking the time to think them over and post here. It is immensely helpful for us.




Add to Olga's comments...


To add to Olga's comments:

 1. The reason we have disabled the ability for a test-taker to go back and edit previous answers is to prevent cheating. We don't want providers to be able to "park" the questions they don't know the answers to, google around for the answers, and then come back and answer them. This helps preserve the integrity of the test scores.

 2. The test randomization also helps preserve the integrity of the tests. Since no two instances of the test are the same, a single provider can't improve his/her test score by taking it repeatedly, nor can he/she help friends by distributing the questions to them. The testing engine makes sure that the "difficulty level" of two instances of the test is relatively the same, so you should be able to make an apples-to-apples comparison between two providers who have taken a test.

 3. Please take a look at the FAQ and the policies for the online testing program here: online_testing_overview *link updated by Jacqui P

 4. It's important to note that the tests and the scores should NOT be the only criteria you use to make hiring decisions as a buyer. Yes, they should definitely be one of the criteria for evaluating whether the provider really possesses the skills he/she mentions on the profile, but other criteria, such as the provider's feedback rating, the cover letter, and your interview with him/her, should also play into your decision-making.

 5. The ability for a buyer to create a "cocktail" test of his/her own choice is definitely a possibility we can consider in the future, as is the ability for a buyer to take tests; right now only providers can take tests.

If you would like to take the tests yourself in the meantime, please sign up as a provider here:

You can take any of the tests after you get your provider account.



oDesk Corp. 



I agree.  



PHP4 test

It's obvious some of the questions were used as padding, and could have been moved to the Advanced PHP test.

Some of the questions refer to very weird and obscure functions and operations, so specific that 99% of all PHP programmers are never likely to use them except in highly specific scientific applications.

These questions made the test unnecessarily hard. I am not a sore loser; I passed the test comfortably. But some of those questions were inappropriate for the "general usage" test this is implied to be, especially with an "Advanced PHP" test available.

We are striving to improve

We are striving to improve the tests to make them as useful for our users as possible. Did you have a chance to provide feedback at the end of the test? The feature was developed specifically to allow our users to share their concerns about the tests with us and help us improve them.



Another suggestion about tests

Hi Olga, 

 A huge benefit oDesk could provide would be resources on where to study and learn the material for improving one's skills.

For example: email etiquette. I did not do so well on that test. Yet I am an experienced technical communicator: my former place of employment (of 10 years) always sent their critical business emails to me. I put out a lot of fires. When someone had a problem understanding and responding to an email, I was the expert.

So we need to know what standards you are testing against, so that we can study and understand them better.

Another example: Graphics in MS Word. I scored very low on this. Why? Because as a technical writer and expert Word user, the ONLY thing I do with graphics is insert them (preferably linked, if the client can handle this). I NEVER create graphics or callouts in Word. There are many sound reasons for this from a quality documentation perspective.

Knowing Word includes knowing which features NOT to use... I've taken several Word tests, not just for you but for hiring agencies. The fact of the matter is, I'm beginning to think that the people writing these tests have NEVER had to use Word professionally!

Finally, tests cannot measure a person's expertise in DOING. If I were asked to explain the VBA code that I use in my templates, I could not do it. I am not a programmer. I understand that a certain function enables a certain action, and I use the code. I know how to look at code and pull out what I need to make my own macros work better. So, am I able to create great templates for my clients? Yes. Can I explain the code I used? No. How can that be tested? I don't think it can.

Thus, if you have a syllabus for tests, this is not enough. We need to know what type of information you want us to learn and where we can study and learn it.

I hope this is helpful. I did enjoy taking the tests. I do believe many questions are ambiguous and poorly written, but that does not change the fact that I learned where there are gaps in what people think I should know. What frustrates me now is that I do not know where to research that information.


I can't remember if I left

I can't remember if I left feedback on the PHP4 test. I did the C Programming test last night and found numerous typos, a few syntax errors, and one question (one which offered only a single answer) that seemed like a trick question.

The sample code had a syntax error, so it would not compile at all, and at the bottom were the answers I could choose from. One of them would have been right if the code's syntax were correct, and another option was "None of the above" or something like that. So technically the code was wrong, but I couldn't figure out whether that was deliberate. If the code was wrong and wouldn't compile, the answer would have been "None of the above"; but if the code was supposed to be correct, the answer would have been one of the other options. I wondered whether it was a loaded question meant to catch people off guard.

I did report nearly every problem I saw. Some I just shrugged off, and in the end, after the test was finished, I left detailed feedback saying that the test needed an overhaul.

I passed it in the end, and I'm OK with the score, but I will be watching out next time.