I had this exact issue a few years back; a VLAN switch was configured wrong.
Since the OP doesn't mention VLANs, I gave some direction that doesn't involve VLANs.
One should look at testing MTU.
MTU tests didn't pass in my case, which led me to the switch misconfiguration.
I do know mine wasn't a proxy issue; I didn't have a proxy in the way then, and I don't have one now.
That doesn't mean this can't be a proxy issue.
However, a proxy issue gets largely ruled out when the OP states:
"We are not behind a proxy server"
That said, it's not absolute; the user may not be aware of every proxy sitting in the end-to-end path.
Also, if you look closely at reply #15, the yum output indicates there isn't a proxy issue.
In fact, it indicates there isn't a proxy at all, at least not one that anyone would be aware of.
It also helps to know what the evidence would look like if it were a proxy issue.
Typically the proxy server drops and logs, and the destination's timers expire with little or no downstream or upstream ado from the proxy server.
A proxy depends entirely on being able to read each and every packet header in order to deliver the traffic.
If it can't work out the source and destination from the packet header, the proxy simply drops the packet and logs the error locally.
And since the proxy couldn't work out the source and destination, it can't report its error to either of them, so the connection is left open with both ends, source and destination, waiting for data, which ties up the network resource.
Fortunately, down at the NIC and network-stack level there are timers that in effect say: no data sent or received in the last 30 seconds, so terminate the connection and free the network resource.
At that point the NIC notifies the OS, the OS notifies the application, and the application tells the user with a "Connection Timed Out" error.
Yum did not report a "Connection Timed Out" error.
Yum did report "Header is not complete", which could have several interpretations: malformed, corrupt, incomplete.
Either way, it's a clear indication that data (packets) did traverse the end-to-end connection.
However, when the packets arrived they weren't readable, so yum couldn't complete the local task at hand.
So now we have a clear indication that received packets are not readable.
Well, what about the other (source) end?
Yum sent a request packet for a file, and the source answered that request with a 404 file-not-found error.
Wait a second: we know the file is there (we verified that), we know yum has the correct file name, and nobody else is having this problem with yum or with the rpm that tells yum what to request.
So what's the problem?
Somewhere in the end-to-end communication the file name is being changed, or it is not being read correctly at the source end.
So we now know that both sent and received packets are an issue in this case.
Your link, Ciao, also reveals....
"Yum isn't doing anything wrong here. The answer in the case of 'Header is not complete' is 'fix your network'."
Exactly...so how do you "fix your network"?
Well, you could read the "Redneck's Handbook of Network Repair".

Or we can understand what is happening or what has happened.
Hacked-up headers (incomplete headers) happen when packets are incorrectly sliced and diced while traversing a network route.
A packet must traverse the network from source to destination intact, within the MTU, the size of the largest Layer 3 PDU the path will carry.
Maximum Transmission Unit (MTU) refers to the size (in bytes) of the largest protocol data unit (PDU) that can be sent in one piece.
If the MTU is not matched at both ends of a connection, packets will get sliced and diced differently at each end of the connection.
So instead of each end slicing cleanly on the packet boundaries, as it should, packets get cut somewhere inside the packet, in either the header or the payload (data) region.
And yum, in its infinite wisdom, is telling us that.
And what, at each end of a connection, controls where a packet gets sliced? You guessed it: MTU.
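A quick worked example, assuming a typical DSL setup: plain Ethernet carries a 1500-byte MTU, but PPPoE adds 8 bytes of its own header, leaving 1492 bytes for the IP packet (1500 - 8 = 1492). A host that still believes the MTU is 1500 sends 1500-byte packets; the PPPoE hop then either has to fragment each one into a 1492-byte piece plus a small leftover, or drop it outright if the don't-fragment bit is set. Either way, the receiver is no longer getting packets cut on the boundaries the sender intended.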
Look at it this way: if you could send a sliced-up loaf of bread from one machine to another, and each machine has to separate the slices, don't both machines need to agree on the length of each slice?
If the machines think the slice length is different, what does their output look like?
Pretty much a mess at the end of the day, wouldn't you say?
Well that's exactly what is happening here.
Without any response from the OP to my questions, it's hard to determine where the problem lies.
My guess is he's on DSL (PPPoE/PPPoA) and his ISP carries a 1492-byte Layer 3 PDU, i.e. MTU 1492.
If SME is behind a firewall (which the OP has stated), then SME will default to an MTU of 1500 and headers will get hacked/sliced/diced up.
OK, maybe not the first header, but subsequent headers after an EOF packet will.
The OP has already presented that evidence in reply #15.
Yum downloaded the rpm and then choked on the deps.
You don't get much better evidence than that... that MTU is a possible issue, though not necessarily the cause.
And MTU must remain consistent/correct/matched end to end for the whole duration of the transfer.
Also, the start of every packet is its headers (a ping packet, for example, carries 28 bytes of them: a 20-byte IP header plus an 8-byte ICMP header), and in this case it is the header that is arriving hacked up and incomplete.
Here's a clue from reply #15..... php-4.3.9-3.22.15.i386.rp
See the problem? The missing "m".
That is why you get "Header is not complete": it has been hacked up in the PDU transfer.
Sure it would be nice if it said "Header is not complete -- Check MTU Settings".
But it's not always the MTU settings that cause the header to be hacked.
It could be the network card hardware, drivers, firmware, software, or OS that precipitates an inconsistent MTU within a transaction and hacks the packet up.
Or, as in my case, it was a VLAN switch config issue.
Take your pick.
However, most of the time it is the MTU settings, and one can assume with some assurance that it's the settings when the poster professes little experience and/or knowledge.
Which should only serve as notice to those with experience and/or knowledge to provide appropriate help.
But if you don't check MTU, none of that matters, does it?
You have to start your diagnosis somewhere, and I'd say ping MTU testing is a very good place to start.
You can run the ping MTU test from the server, from a client, and from the firewall if you have access.
That simple little test, and comparing the results, just might narrow down where the problem lies.
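As a rough sketch (the host names here are only placeholders, substitute your own gateway and a mirror you actually use): on Linux you ping with the don't-fragment bit set and a payload 28 bytes smaller than the MTU you are testing, since the IP and ICMP headers take up 28 bytes of each ping packet:

ping -M do -s 1464 -c 4 gateway.example.lan   (1464 + 28 = 1492, the usual PPPoE MTU)
ping -M do -s 1472 -c 4 mirror.example.org    (1472 + 28 = 1500, the full Ethernet MTU)

If the 1472 test reports something like "message too long" or simply gets no replies, while the 1464 test passes cleanly, the path MTU is 1492 and a host configured for 1500 will have its packets chopped. On Windows the equivalent test is ping -f -l 1464 <host>.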
Everything on the network segment must be set to the same MTU, and it must match what your provider uses.
If the OP responds that he's on cable and not DSL, then ping MTU testing still needs to be done for sure.
Why? Because cable MTU is 1500, plain Ethernet, and most local networks default to that setting, which would rule out a settings mismatch, though not an MTU issue altogether.
In any event, proper diagnostic testing has to happen before you can get to the solution.
My bet is the ISP is at MTU 1492 (PPPoE/PPPoA) and SME is set up at MTU 1500.
Surely a better bet than a proxy that isn't there, or bad links to test with.
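And if that's how it plays out, check what the server and firewall interfaces think the MTU is, and bring it into line with the ISP. A minimal sketch, assuming the external interface is eth0 (adjust the interface name to suit your box):

ifconfig eth0 | grep -i mtu        (or: ip link show eth0)
ifconfig eth0 mtu 1492             (live change only, it reverts at reboot)

Make the change permanent through your distro's normal network configuration rather than just the live command, otherwise it will be back to 1500 after the next restart.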
Here's an online tool that can help in the diagnosis.
http://w3dt.net/tools/mturoute/
HTH