New STS Plate Penetration Program HCWCALC.EXE
Most of the material used here is from the files and reports of Dr. Allen V. Hershey and associates of the Computation and Ballistics Department, US Naval Proving Ground (NPG), Dahlgren, Virginia. He led this group from early in WWII. After the war, his group wrote a large set of inter-related Technical Reports, completed in 1955 - the year that naval armor as a separate subject ended and all further work was turned over to the US Army - on homogeneous, ductile armor (the very similar US Navy WWII BuShips Special Treatment Steel and BuOrd Class "B" armor) attacked by various steel Common and Armor-Piercing projectiles, service and experimental, with and without AP caps, Hoods, and windscreens, and with various nose shapes, attempting to cover the general subject as widely and thoroughly as possible. To a large extent they succeeded, at least in showing ways to approach the subject where their results were incomplete.
Dr. Hershey also created a five-box set of 3"x5" index cards annotated with test results that he built up throughout his time as head of the above NPG group, including many tests done before he came to NPG that he felt were important for this analysis effort. This box set contains thousands of ballistic tests of standard homogeneous, ductile armors of various grades, of face-hardened armor both modern (WWII) and older (circa WWI and even before), and of other kinds of steel. Many different kinds of projectiles are also represented in this database. He used it extensively in creating the documents mentioned above.
When Dr. Hershey retired in 1981 he sent me the index card set and a copy of all of these reports, declassified. When added to my existing database (US Navy Ordnance Pamphlet OP ORD 653, ARMOR PENETRATION CURVES (1944), US Navy OP 8, ARMOR: QUALITY, TEST, & PENETRATION (1923), German Navy G.Kdos. 100 (Secret Command Document 100) (1940), and so forth), I have a very complete library of data on this subject, which I am slowly using to incrementally create a final computer program on homogeneous, ductile armor (at least on one basic type) that includes all effects that I feel are important. I already have something close to this in my face-hardened-naval-armor FACEHARD 6.9 program. Homogeneous, ductile armor is demonstrably much more complicated than face-hardened armor, which is why it has never been completely solved, even though it has been worked on extensively since the time of the US Civil War, 150 years ago! Now THAT is a "non-trivial" problem!!
The current program uses some of the index cards and other sources and is based on my previous M79APCLC.EXE computer program for homogeneous, ductile STS/Class "B" armor versus the WWII 15-lb US Army 76mm (3") M79 AP Shot projectile (bare nose with a tangent ogive arc-of-circle nose with a Radius of Ogive of 1.66 calibers (5")) and scaled models thereof. This was Dr. Hershey's main "Projectile Primary Standard" in his study, with other projectiles and other armors compared to this projectile fired against the typical 115,000-psi-tensile-strength, 225-Brinell STS/Class "B" armor used extensively by the US Navy during WWII (other grades of armor with different hardness values and tensile strengths were also used, but one has to start somewhere!). Dr. Hershey's documents contained most of the data and formulae used to create this program.
My only addition to this database used in the program was the use of the Percent Elongation to cause increased scaling for projectiles over 8" in diameter against plates with this value under 25%, such as German Krupp WWII Wotan Härte (Wh, "Odin Hard" naval armor), which had this value at only 18% in its manufacturing specs (these were captured by the US after WWII and I have a copy). Indeed, when I noticed this scaling effect in the armor penetration curves from G.Kdos. 100 - German Krupp late-model WWII APC projectiles of 8", 11.3", 14.96", and 16" ("Psgr.m.K. L/4,4") were virtual scale models of one another and gave a perfect set of tests with changing projectile size, but nothing else - I was originally at a loss as to why Wh should have this effect, which did not occur for shells up to 8" in those same tables (the smaller-shell penetration curves matched US and British calculated results almost perfectly). The 25% value was used by WWII US Navy STS/Class "B" armors and WWII British Non-Cemented Armor (NCA), and this seems to be high enough to eliminate this effect (ductile, stretchable wrought iron would have this value even higher, though a higher value gives no benefit here). Study showed that the only parameter in which the US, British, and German armors differed appreciably was this Percent Elongation value; the rest were almost identical in all of these armors. While it is possible that some factor that nobody has ever measured in steel is causing this poor showing against large-scale projectiles in Wh armor only, I really do not think so in this case, since armor resistance tracks reasonably well with all of the other parameters measured, even in those tests used back in WWII (steel metallurgy has improved, but most parameters measured are still the same; only the understanding of how important each one is has shifted over the years since WWII).
Therefore, I added a term to M79APCLC that uses the ratio of the actual Percent Elongation of the steel divided by 25% to adjust a linear scaling term that gets bigger as this ratio gets smaller and gives worse and worse results as projectile size increases, on top of the small scaling effect already used in M79APCLC when the projectiles were under 8" or when the Percent Elongation was 25%. I scaled the 18% value to match the curves in G.Kdos. 100, since I assumed that the Germans knew what they were describing and that these results were from actual tests and thus were reliable. Most other modern armors have this value closer to 25%, if not actually 25%, though a few armors and most non-armor steels sometimes have this value even less than the 18% used for Wh. I am at a loss as to why Krupp would do this, since tests have always shown that Wh-type armors hit by large, slower-moving AP projectiles at high obliquity, as would be the case with deck armor, rely strongly on their ability to gradually stretch to cause the projectile to skip off the plate after making a long groove, constantly pushing up on the nose the whole time. An armor plate that fails to stretch enough prior to splitting apart will be inferior in this regard - the very poor showing of rigid face-hardened plate under these conditions reinforces this big time!
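As an illustration only, the shape of such an elongation-driven scaling term can be sketched in Python (the program itself is QuickBASIC). The coefficient k and the linear functional form here are placeholders of my own, NOT the values actually fitted to the G.Kdos. 100 curves; only the gating conditions (no penalty at or under 8" diameter or at 25% elongation, penalty growing with diameter and with the elongation shortfall) come from the text above.

```python
def elongation_ratio(percent_elongation: float) -> float:
    """Ratio of the actual Percent Elongation to the 25% 'best' value.
    Entries at or above 25% are capped, since higher gives no benefit."""
    return min(percent_elongation, 25.0) / 25.0

def extra_scale_penalty(diameter_in: float, percent_elongation: float,
                        k: float = 0.5) -> float:
    """HYPOTHETICAL extra scaling term: zero for projectiles of 8" or
    less or for 25% elongation, growing with diameter above 8" and
    with the shortfall below a ratio of 1. Both k and the linear form
    are placeholders for illustration only."""
    ratio = elongation_ratio(percent_elongation)
    if diameter_in <= 8.0 or ratio >= 1.0:
        return 0.0
    return k * (1.0 - ratio) * (diameter_in - 8.0) / 8.0
```

With 18% elongation (Wh) the penalty is non-zero for a 16" projectile but zero for any shell of 8" or under, matching the behavior seen in the G.Kdos. 100 tables.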
Dr. Hershey would not have noticed such an extra scaling effect because (1) he used US armors that had the maximum Percent Elongation of 25%, so the effect was not there in most US tests in the first place, and (2) most of his tests were for small-scale projectiles (0.6" through 6", mostly around 3"). The very expensive large-projectile test results, when they were not just projectile and plate acceptance tests at artificial test velocities that did not really show the full capabilities of the plate or projectile (only tested against the minimum allowed), were for limited purposes (definitely not this one!), and those few tests where this might have been evident were "lost in the noise".
The source documents I used most in this new program were US NPG Report #1211, "The Windscreen Effect, the Hood Effect, and the Cap Effect" (6 June 1955) and US NPG Report #1120, "The Construction of Plate Penetration Charts or Tables" (26 May 1955). The M79APCLC program has used and future upgrades to this program will also use NPG Report #4-46, "BALLISTIC SUMMARY - PART II: The Scale Effect and the Ogive Effect" (1 April 1946) among many other documents and those index cards.
This program is an upgrade of my previous MS-DOS-based QuickBASIC 4.5 M79APCLC.EXE US Navy WWII homogeneous, ductile nickel-chromium Special Treatment Steel (STS) or Class "B" armor penetration program. The ZIP file contains: the HCWCALC.EXE program; the BRUN45.EXE BASIC support program that has to be in the same folder (the program was built and compiled using the Microsoft MS-DOS QuickBASIC 4.5 Interpreter and Compiler application suite); the two source files HCWCLCMN.BAS (Main file of the program) and HCWCLCS1.BAS (several subroutines that did not fit in the Main file), which are linked when compiled; the HCWCLCMN.MAK file that tells QuickBASIC that the two source files must be loaded together into the QuickBASIC 4.5 Interpreter when the Main file is selected; and this text file describing things. HCWCALC.EXE will work as-is in some versions of Windows, but under Windows 7 it requires the Windows shareware utility DOSBox or a similar MS-DOS emulator to create a special MS-DOS "sandbox" window, along with a shell program (DOSBox loader utility) such as Loonies Software DOSShell Version 1.8 (you put the MS-DOS program icon and some path information into the shell for each of your MS-DOS programs; when you run the shell, it shows the icons of these programs and you load and run them just like any other small Windows folder by double-clicking an icon, at which point it starts up DOSBox and runs your program in its window). Currently DOSBox is at Version 0.74 and it works fine for MS-DOS text-only programs like mine in Windows 7.
The program initially displays the title and the first of several pages of a general discussion of its operation. The user can step through all of them (enter "Y" to the question at the bottom of each page to read the next one in succession) or skip them to immediately begin his entries (enter "N" to the question). At the end of the last discussion page, any entry goes to the begin-entry section.
If all that is desired are the answers for the original US Army WWII 76mm (3") M79 AP Shot (bare-nosed) projectile nose design (tangent ogive with a Radius of Ogive of 1.66 calibers (5" for the 76mm size)), merely make sure that the windscreen, Hood, and AP cap weight percentage entries are all zero, skipping their added logic. This reverts back to my M79APCLC program computations unchanged.
Enter the projectile diameter in inches (default = 3 (76mm)). (Hitting ENTER with no entry reverts to the last legal number input for that quantity during the current run of the program, here initially the "canned" 3" value. If there is no previous legal entry and no canned default for that number, the user has to make a legal entry and then hit ENTER to continue. Always answer "Y"/"N" questions - there are no defaults for these.)
Enter the total projectile weight in pounds (default = 15).
Enter a legal plate thickness in inches (default = 3). The program prints the minimum and maximum allowed entries.
Enter the plate quality factor (default = 1 (STS standard)). This is found in my armor metallurgy document in my section under Naval Technology at the NAVWEAPS.COM Web Site. It is based on the thickness (equivalently, weight per unit area) of the armor under test that will give the same stopping power as an STS plate. So if the plate must be, say, 1.1" thick to stop the projectile at the same striking velocity and impact angle (obliquity) that a 1" plate of STS needs to stop it, the plate quality here is 1/1.1 = 0.909. Unless I know otherwise, I use a single quality factor per armor type, ignoring thickness, though some armors may be better at some thicknesses than at others, so this might have to be adjusted when changing the thickness by much, compared to STS of a similar thickness. (In most entries, only three decimal places after the decimal point are used, with longer entries being rounded to the nearest third decimal place. There are a few exceptions, called out when reached.)
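The quality-factor arithmetic above can be sketched as follows (a minimal Python illustration; the function names are mine, not the program's):

```python
def plate_quality(sts_equiv_thickness_in: float,
                  test_plate_thickness_in: float) -> float:
    """Quality factor: the STS thickness giving the same stopping power,
    divided by the thickness the tested armor needs to do the same job.
    A plate needing 1.1 inch to match 1 inch of STS has quality 1/1.1."""
    return sts_equiv_thickness_in / test_plate_thickness_in

def effective_sts_thickness(actual_thickness_in: float,
                            quality: float) -> float:
    """STS-equivalent thickness of a plate of the given quality."""
    return actual_thickness_in * quality
```

So a 2" plate of quality 0.909 resists roughly like 1.818" of STS, under the single-quality-factor-per-armor-type simplification described above.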
Enter the Percent Elongation of the armor used (default = 25 (best)). This only has an effect when the projectile diameter is over 8" and the Percent Elongation is under 25%, by adding in an extra scaling factor that reduces the plate resistance to larger and larger projectiles on top of the small built-in effect used with STS. 25% is the maximum input here and is the value used for wrought iron and WWII US STS and British NCA - most older homogeneous, ductile steel armors and some other WWII armors, such as WWII German Krupp Wh, and most other non-armor steels have a smaller value and suffer this added penalty against large projectiles. This effect was found by me in the German Navy penetration tables for Wh in the WWII German document G.Kdos. 100 and I added it to M79APCLC and to this program, too.
Enter the obliquity angle of the impact direction of motion relative to the plate face (default = 0 = right angles ("normal")). (Note that the nose of the projectile might be tilted relative to the direction of motion (yaw) due to a prior impact or poor spin stabilization. If the yaw is small (up to 10 degrees or so), this can be handled by adjusting the obliquity, but larger yaws can require more radical adjustments. This yaw can be in any sideways direction and, due to the projectile spin, the nose tip would be corkscrewing (nutating) around and around as the projectile moves forward, so it might help or hurt penetration or merely cause the projectile to be deflected sideways as it penetrates. Complicated, if used.)
Enter the percent weight of the windscreen (default = 0 (no windscreen)). Maximum allowed is 25%, which is big enough to include all possible windscreens, most of which are well under 10%. The Windscreen Effect is adjusted from the default 5.1% table value by the percent weight variation used. The Windscreen Effect changes whatever it is attached to: the AP cap face shape if the cap is unshattered, but the actual projectile nose shape for a shattered cap or for a Hood. (The Hood Effect accounts for the Hood's change to the penetration ability of the projectile nose and is a separate adjustment from the Windscreen Effect - the Hood and Windscreen Effects are rather small and trying to mix them together serves no real purpose, so each is done separately, even though they are layered on the projectile nose.)
If the windscreen weight was non-zero, enter a multiplier (default = 1 (no effect)) to adjust the default program computations for the given test conditions when the windscreen is not a simple hollow sheet-metal "dunce cap" but has such things as the aluminum windscreen metal of the last-version German WWII APC, the cut-out holes of US WWII AP, the break-away attachment threads of Japanese WWII AP, or the French and British WWII "K"-shell explosive charges in the tip - most of these changes being to allow a dye bag to mark the impact as coming from a given ship when more than one ship is firing on the same target. These adjustments make the windscreen act as if it were of lower weight, never higher weight. If the windscreen multiplier is 0.75 or more and the plate thickness is under 0.03 caliber, or the windscreen multiplier is 0.4-0.749 and the plate thickness is under 0.015 caliber, then the windscreen does not get damaged by the impact and the projectile gets no effect from any windscreen, AP cap, or Hood, just the long pointed shape of the windscreen itself (the M79 nose is used here, but that will change when nose shape effects are added). Otherwise, the windscreen is crushed and torn apart during penetration, causing the Windscreen Effect that increases the minimum penetration velocity by a few percent. Some examples are printed on-screen. This entry is skipped if there is no windscreen.
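The windscreen-survival thresholds above can be expressed as a small Python sketch (illustration only; the function name is mine):

```python
def windscreen_survives(multiplier: float, plate_thickness_cal: float) -> bool:
    """True when the windscreen is NOT damaged by the impact, per the
    thresholds in the text: multiplier >= 0.75 against a plate under
    0.03 caliber, or multiplier 0.4-0.749 against a plate under 0.015
    caliber. When it survives, no Windscreen, AP Cap, or Hood Effect
    applies - only the windscreen's own long pointed shape matters."""
    if multiplier >= 0.75:
        return plate_thickness_cal < 0.03
    if 0.4 <= multiplier < 0.75:
        return plate_thickness_cal < 0.015
    return False
```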
Enter the percent weight of the Hood (default = 0 (no Hood)). Maximum allowed is 50%, which is big enough to include all possible Hoods, most of which are well under 10%, though a couple were made thicker to make the shells/bombs more easily dive at a high-obliquity impact with the ocean. Hood Effects are adjusted from the default 5.2% table value by the percent weight variation used. (If this is non-zero, then there is no AP cap, so the AP cap weight is zeroed, if previously used, and all cap entries previously made in a prior loop of the program are reset to their initial conditions at program start. Cap computations are then skipped.) There are no adjustments to Hood Effects other than adjusting the weight. The Hood is knocked off the nose by a 0.0805-caliber plate, but remains intact and penetrates the plate up through a 0.1-caliber plate. This still causes the Hood Effect, but in its intact version. Above a 0.1-caliber plate, the Hood is broken up and does not penetrate, giving the Hood Effect's broken version. The windscreen's existence does not change these values, just adds to them. The Hood Effect changes the nose shape it is fitted to.
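The Hood thresholds above can be summarized in a short Python sketch. Note one assumption of mine: the text only gives the 0.0805-caliber knock-off thickness and the 0.1-caliber intact/broken boundary, so I treat anything thinner than 0.0805 caliber as leaving the Hood attached.

```python
def hood_state(plate_thickness_cal: float) -> str:
    """Classify the Hood's fate per the thresholds in the text.
    Below 0.0805 caliber the Hood is assumed (my reading) to stay
    on the nose; from there up through 0.1 caliber it is knocked
    off but intact; above 0.1 caliber it is broken up."""
    if plate_thickness_cal < 0.0805:
        return "attached"                # plate too thin to knock it off
    if plate_thickness_cal <= 0.1:
        return "knocked off, intact"     # intact-version Hood Effect
    return "broken up"                   # broken-version Hood Effect
```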
If the Hood entry is zero, then any previous Hood values in the program are reset to their initial state, the Hood logic is skipped, and the program asks for the percentage of the AP cap (default = 0 (no cap)). As with the Hood, the maximum value is 50%, with some caps having over 25%. Caps vary a lot in weight.
If the AP cap weight is non-zero, the following series of entries must be made:
Cap Number: #1 = 130-lb WWII US Navy 6" Mk 35 MOD 5 AP shell (20% cap in the tables) & #2 = 15-lb WWII US Army 76mm M62 APC (actually an APCBCHE in strict Army terminology) shell (13.9% cap in the tables). These have different weights and Shattered Cap Effect Tables. The Intact Cap Effect and Cap Edge Effect can be used identically with either cap, though in real life they differed in the Intact Cap Effect (#1 had it and #2 did not). The Cap Edge Effect was identical for each, since the cap face shape was identical (50° face-angle cone with a rounded tip, as will be discussed in more detail later). A change in the cap weight affects the Shattered Cap Effect results differently, depending on the answers to later questions below. There is no default, so the user must enter "1" or "2" here to continue - or hit ENTER by itself to repeat the previous choice if this entry was already used on the last program loop without the cap weight ever being set to zero or a Hood weight being entered in-between. (If other caps ever get reliable information, I may add more to this list, with various changes in all of the AP cap effects tables, as needed.)
If the AP cap is not a hard cap ("hard" means that the entire face portion is over 350 Brinell, usually over 475 Brinell, though the bottom region touching the projectile nose might be softer than this), then it is either "soft" (usually of mild steel and up to ~230 Brinell) or "tough" (made of a high-nickel-content armor-type steel and usually ~250-300 Brinell), which makes a difference against face-hardened armor, but here both are lumped together as "soft" against homogeneous, ductile steel armor. The two caps here are both normally hard, so answer "N" for NO to the soft/tough cap question, but the user can switch to the alternate logic by selecting "Y" for YES, if desired. In the logic, a regular hard-faced cap (which can be modified by later entries below) changes the Shattered Cap Effects Table results for that Cap Number by multiplying the ratio of the percent weight to 20% for Cap #1 (%Wt/20) or to 13.9% for Cap #2 (%Wt/13.9) times 0.6, making the effect of weight changes less. For a soft/tough cap, this multiplier is changed from 0.6 to 0.5, making the Shattered Cap Effect even less different as percent weight is changed. This has no effect on the Intact Cap Effect, if used, but the Cap Edge Effect is always zeroed for soft/tough caps, as their edges would flatten out on the plate due to their softness. The user must answer "Y" or "N" here to continue.
If the Cap is Hard, then this question is asked (it is skipped for a soft cap):
Is the Cap Edge Deeply Notched (for windscreen threads) or is the Cap extra-hard (over 600 Brinell)?
If "Y" for YES (to either or both questions), then the Shattered Cap Effect weight multiplier is changed from 0.6 to 0.75, increasing the effect of changing the cap percent weight from the table values for that Cap Number. This has no effect on the Intact Cap Effect or Cap Edge Effect. Note that the notched edge has nothing to do with the sharp corner used by the Cap Edge Effect, since this answer changes the results even at right angles impact, where the corner of the cap has no effect whatsoever. The notch is causing added stress to the cap on impact, just as if it was harder and more brittle, making the weight of the cap act differently. I think this is due to the cap beginning to break up earlier as it is crushed against the plate face when the notch or higher hardness exists, so the cap weight is removed from the penetration process more rapidly than if the cap is of moderate hardness and does not have this notch, making the Shattered Cap Effect more sensitive to cap weight changes. This makes sense, since the softer caps, which take longer to crush since they do not break apart like glass due to the impact, have a lower sensitivity to weight changes than the regular hard cap. The user must answer "Y" or "N" here to continue.
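Pulling the weight-sensitivity rules together: one plausible reading (mine, not confirmed by the text, which only names the 0.6/0.5/0.75 multipliers and the reference weights) is that the Shattered Cap Effect table is scaled by 1 + k*(cap%/ref% - 1), so the table is unchanged at the reference weight and k sets how sensitive the effect is to weight changes:

```python
def shattered_cap_weight_factor(cap_pct: float, ref_pct: float,
                                soft: bool,
                                notched_or_extra_hard: bool) -> float:
    """Scale factor applied to the Shattered Cap Effect table values.
    ref_pct is 20.0 for Cap #1 or 13.9 for Cap #2. The sensitivity k
    is 0.5 (soft/tough), 0.75 (deeply notched edge or over 600
    Brinell), or 0.6 (regular hard cap). The exact way the program
    combines k with the weight ratio is my interpretation."""
    if soft:
        k = 0.5                    # soft/tough caps: least sensitive
    elif notched_or_extra_hard:
        k = 0.75                   # notched or extra-hard: most sensitive
    else:
        k = 0.6                    # regular hard cap
    return 1.0 + k * (cap_pct / ref_pct - 1.0)
```

At the tabled reference weight the factor is exactly 1; a 50% overweight cap changes the table by 30%, 25%, or 37.5% depending on the cap type under this reading.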
Enter the decapping plate thickness in calibers (default = 0.0805). This default is for my Type 1 AP caps held on by anything but high-temperature extra-strong solder against any kind of steel plate at any velocity that penetrates (or close to it that doesn't) at any obliquity of impact. This means all caps except my Type 2 caps. My Type 2 caps have a 0.1605-caliber decapping thickness against any kind of steel at any obliquity, and are (1) all German Krupp soldered AP caps (at least from 1911 to 1945) or (2) some US Army WWII APC projectiles made by companies that used such solder in their normal commercial business (varies with the specific manufacturer). I allow other values here, but those are the two I usually use. The program's Cap #1 is really a Type 1 cap and Cap #2 is really a Type 2 cap, but the user can change this as desired for any cap by simply changing this entry or using the default for both cap types. The Intact Cap Effect cannot occur unless the projectile is decapped by the plate it hits, so this sets a floor thickness for this effect, when applicable. Other Cap Effects not affected. This value can be 4 decimal places, not just three.
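The decapping rule above reduces to a simple threshold test (Python illustration; the function name is mine). The Intact Cap Effect is gated on this result:

```python
def decapped(plate_thickness_cal: float, cap_type: int = 1) -> bool:
    """Type 1 caps (most caps) come off against a 0.0805-caliber or
    thicker plate; Type 2 caps (German Krupp soldered caps 1911-1945
    and some US Army WWII APC from solder-using manufacturers) need
    0.1605 caliber. The program lets the user enter other values."""
    threshold = 0.0805 if cap_type == 1 else 0.1605
    return plate_thickness_cal >= threshold
```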
Enter a Shattered Cap Effect multiplier (default = 1 (no effect)). This adjusts the results of this Cap Effect if the changing of the Cap Number, the cap weight, and the notched-edge/hardness answer to change the weight modification multiplier do not give Shattered Cap Effect results that match the test results (for other caps than the two given in the program). This multiplier shifts the table results up or down over the entire table by the given multiplier. Has no effect on any other Cap Effect. This is needed, at the very least, when cap hardness changes the results, even for a cap very similar to one of the two given. For example, the late-WWII 6" Mk 35 MOD 9 and 10 AP projectiles had caps very similar to the MOD 5 Cap #1 design, but were hardened to 650-680 Brinell entirely through from the face to the bottom pressing up against the projectile nose (only a narrow edge was kept soft for the crimping into the shallow pits ringing the lower nose, as used with all other US AP caps from WWI through after WWII). They had all of the Cap #1 table Shattered Cap Effect values increased by 40% (multiplier = 1.4), probably due to the higher brittleness of these caps making them break up earlier in the penetration process and thus interfering in the penetration as broken pieces prior to that happening with the previous softer MOD 1-8 caps. These other hardness values were 514 Brinell maximum for the MOD 5 cap and 494 Brinell maximum for the 76mm M62 APC cap, which were rather low hardness values by most WWII AP cap face standards, including all larger US Navy AP projectile caps (555-580 Brinell, with the 8" Mk 21 MOD 5 being hardened to the same super-high level as the 6" Mk 35 MOD 9 & 10), smaller US Army APC projectile caps, such as the 37mm M51B2 APC caps (625 Brinell), and almost all newly-designed foreign projectile AP caps, Army or Navy (600-625 Brinell). 
The Shattered Cap Effect will probably be the one that is most unique to each projectile cap design and hardness contour when information is available on this (several tables with associated multipliers needed).
Enter the cap shatter minimum plate thickness in calibers at normal (OB = 0) (default = 0.66). The indicated linear decreasing formulae (in two steps, rather steep at 0-55° and then much less steep at 55-65°, with the value constant above 65°) printed out on the screen will adjust this for all other obliquities automatically when a new normal value is entered; that is, (NEW NORMAL VALUE/0.66) multiplies all of the calculated default values over all obliquities. The default value is for Cap #1 if used as it really was. The value for Cap #2 is 0.528 (80% of the Cap #1 value), with both changing to 0.33 if you select a soft/tough cap (the softer cap flattens out - "mushrooms" - on the plate face against a thinner plate at all obliquities, which gives the same results as shatter does), though these other values must be entered by the user manually if they are being used instead of 0.66. The user can select any value desired, greater or less than 0.66 (almost all projectiles tested by the US Navy in WWII had this value lower, and only one had this value equal to the MOD 5 cap, but this is not a hard-and-fast rule). Like the Shattered Cap Effect itself, this is probably unique to each cap design chosen. A linear formula is used for the drop in the shatter plate thickness from the entered value as the obliquity of impact increases because each cap has a different shape of curve for this drop (the MOD 5 has a minimum shatter thickness curve that slowly drops this shatter thickness at low obliquity and drops faster and faster as obliquity increases, from my own plot of the results), but for a universal formula for all projectiles, the "best-fit" linear drop versus obliquity, flattening out above 55°, is the curve that has the least error for all of the projectiles tested by the US Naval Proving Ground before, during, and after WWII.
This is one of those "better is the enemy of good enough" cases, since the details are missing for most projectile caps over most obliquities, so something has to be used when partial data is available for a "ballpark" estimate. (Remember, the purpose here was to try to make the optimum projectile design against the widest range of armored targets and, conversely, to make the best average plate design to resist the most enemy weapons of the kind the protection was designed to withstand - trying to stop a 15" AP projectile with cruiser-thickness armor is not going to happen, even if the projectile is a poor example of its type, so the range of weapons being protected against has to be evaluated using some averaging method, such as this use of a linear approximation for the plate thickness drop versus obliquity for cap shatter.)
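The two-step linear drop and the (NEW NORMAL VALUE/0.66) rescaling can be sketched as below. Important: the fractions of the normal value remaining at 55° and 65° (v55_frac, v65_frac) are PLACEHOLDERS of mine; the real slopes are printed by the program, not given in this text. Only the structure (steep to 55°, shallower to 65°, constant above, whole curve scaled by the new normal value over 0.66) comes from the description above.

```python
def shatter_thickness(obliquity_deg: float, normal_value: float = 0.66,
                      v55_frac: float = 0.5, v65_frac: float = 0.4) -> float:
    """Minimum plate thickness (calibers) for cap shatter versus
    obliquity: linear and steep from 0 to 55 degrees, linear and
    shallow from 55 to 65, constant above 65. The whole curve is
    scaled by (normal_value / 0.66), as the program does."""
    base = 0.66                             # Cap #1 default at normal
    if obliquity_deg <= 55.0:
        t = base + (v55_frac - 1.0) * base * (obliquity_deg / 55.0)
    elif obliquity_deg <= 65.0:
        t = v55_frac * base + (v65_frac - v55_frac) * base * \
            ((obliquity_deg - 55.0) / 10.0)
    else:
        t = v65_frac * base                 # constant above 65 degrees
    return t * (normal_value / base)
```

Entering 0.528 for Cap #2, for example, simply scales every point of the curve by 0.528/0.66 = 0.8.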
Respond to the question concerning whether the Intact Cap Effect is to be allowed. In real life, only projectiles with rounded noses under about 0.866-caliber Nose Height (under the Nose Height of a tangent ogive nose shape with a 1-caliber Radius of Ogive) have this effect at oblique impact, as the decapped cap can tilt sideways by some angle by sliding along the nose, so only Cap #1 in reality has this effect. The user can select it for any projectile he wants to test variations in nose shape. This effect only has a non-zero positive value, making penetration more difficult by raising the penetration velocity, for a range of obliquities from 40-75°, being zero at these extremes and maxing out at an 8.2% tabled value against a plate under the shatter thickness at 55°. When the cap shatters at some plate thickness, this effect instantly disappears. The question must be answered "Y"/"N" before the program can continue. If not allowed, the word "INHBTD" (inhibited) will appear on the Output Screen, replacing its Cap Effects Multiplier value (see H, below) and the effect's value will be set to zero.
If the Intact Cap Effect is allowed, the user can now enter a multiplier as to how large it is, which changes the entire table in the program by that amount (default = 1 (no effect)). This works exactly like the Shattered Cap Effect multiplier. If the Intact Cap Effect is not allowed, this value reverts back to 1, no matter what it was set to previously, and this entry is bypassed.
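A sketch of the Intact Cap Effect gating and shape follows. The program uses a full table; the straight-line interpolation between the three known points (zero at 40° and 75°, 8.2% peak at 55°) is my simplification for illustration:

```python
def intact_cap_effect(obliquity_deg: float, cap_shattered: bool,
                      decapped_by_plate: bool,
                      multiplier: float = 1.0) -> float:
    """Fractional rise in the penetration velocity from the Intact Cap
    Effect: zero outside 40-75 degrees, peaking at the tabled 8.2% at
    55 degrees (linear interpolation is my assumption). The cap must
    be removed by the plate, and the effect vanishes if the cap
    shatters. 'multiplier' is the user's table multiplier."""
    if cap_shattered or not decapped_by_plate:
        return 0.0
    if obliquity_deg <= 40.0 or obliquity_deg >= 75.0:
        return 0.0
    peak = 0.082
    if obliquity_deg <= 55.0:
        frac = (obliquity_deg - 40.0) / 15.0   # rising leg, 40-55 deg
    else:
        frac = (75.0 - obliquity_deg) / 20.0   # falling leg, 55-75 deg
    return peak * frac * multiplier
```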
Respond to the question concerning whether the Cap Edge Effect is allowed. If so, then the default 12% drop in the minimum penetration velocity will apply unless it was changed to some other value in a previous loop. Both Cap #1 and Cap #2 have this same effect in reality, since they have the same 50° face-angle design. It does not involve the cap's weight, just its face shape. Only caps with a relatively sharp corner at the face edge will have this effect, and it probably varies with the sharpness of the corner. The -12% constant value is used with US WWII face-angle caps with a sharp upper-edge inner angle of about 133-135°, due to the cap sides curving inward somewhat as the nose tapers for streamlining. Sharper corners probably have a greater effect, but I do not have the information at present. There is a minimum plate thickness to allow this effect, which decreases with increasing obliquity: the speed of the cap edge in the direction parallel to the plate face must be fast enough to scoop up material in front of it until that material forms a wall that impedes ricochet enough to allow a transverse notch to be cut into the plate face, much like a wood plane, rather than just the long slot opened in the bottom of the extended gouge in the plate that happens with noses without this corner. The effect only occurs at a 50° obliquity angle or more, with an appropriately thick plate. This effect is roughly as large as the Shattered Cap Effect, but is a constant, not varying with obliquity or plate thickness like the latter. The question must be answered "Y"/"N" before the program can continue. If not allowed, the word "INHBTD" (inhibited) will appear on the Output Screen, replacing its Cap Effects Multiplier value (see J, below) and the effect's value will be set to zero.
If the Cap Edge Effect is allowed, the user can adjust the -12% change to any other negative value he desires (never a positive value, as this effect always IMPROVES penetration ability by LOWERING the penetration velocity). This question is skipped and the value set back to -12% if the Cap Edge Effect is not allowed, no matter what it was previously set to.
If the Cap Edge Effect is allowed, the user can multiply the minimum thickness that allows it within a range of 0.1-2 (default = 1 (no effect)). This multiplier changes all of the values in the curve of this minimum with obliquity. The tabled values are 0.5-caliber plate minimum at 50° obliquity dropping in a smooth curve to 0.106-caliber minimum at 80° (extrapolated from the highest 75° value in the tables). If the Cap Edge Effect is not allowed, this value goes back to 1, no matter what it was set to on a previous loop of the program and this entry is skipped.
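The Cap Edge Effect conditions can be sketched as below. The text calls the minimum-thickness curve "smooth"; straight-line interpolation between the two known endpoints (0.5 caliber at 50°, 0.106 caliber at 80°) is my simplification, and the function names are mine:

```python
def cap_edge_min_thickness(obliquity_deg: float, mult: float = 1.0) -> float:
    """Minimum plate thickness (calibers) for the Cap Edge Effect:
    0.5 caliber at 50 degrees falling to 0.106 at 80 degrees (the real
    curve is smooth; linear here is an assumption). 'mult' is the
    user's 0.1-2 multiplier, scaling the whole curve."""
    frac = (min(max(obliquity_deg, 50.0), 80.0) - 50.0) / 30.0
    return mult * (0.5 + (0.106 - 0.5) * frac)

def cap_edge_effect(obliquity_deg: float, plate_thickness_cal: float,
                    pct_change: float = -12.0, mult: float = 1.0) -> float:
    """Constant percent change to the minimum penetration velocity
    (default -12%, always negative) when obliquity is 50 degrees or
    more and the plate meets the minimum thickness; zero otherwise."""
    if (obliquity_deg >= 50.0 and
            plate_thickness_cal >= cap_edge_min_thickness(obliquity_deg, mult)):
        return pct_change
    return 0.0
```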
This ends the AP Cap inputs.
If the user has some minimum penetration velocity from some other source that is different from the M79 nose used in this program, he can now enter it and it will be used in place of the calculated M79 value (the M79 value is still calculated and displayed in the final results screen, just not used). This entry will skip the Base-Through Penetration logic, due to no data on such a thing here. As with the other velocity values input, calculated, and/or displayed, this velocity must be a whole number (no fraction allowed). The minimum is 1 foot/second and the maximum is the maximum allowed input for computations, 3500 feet/second. Above this velocity, the penetration equations start to change so much, due to heat distorting or even melting the armor and projectile and intense shockwave effects, that this program will no longer give even a ballpark result. The program will run its calculations into this forbidden region and display them, but they are not reliable and are only there to allow the calculations to complete, because sometimes the final answer is within the allowed region even if an intermediate value is not (as when the -12% Cap Edge Effect comes into play). Setting this entry to zero or a negative number (which becomes zero immediately) skips it in the program logic and the regular M79 computations are used everywhere.
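The input rules for this override entry amount to the following sanity check (a Python illustration of the stated limits; returning None for the "skip" case is my representation, not the program's):

```python
def user_velocity_override(raw):
    """Whole-number minimum penetration velocity override, 1-3500
    ft/sec. Zero or a negative entry means 'skip the override and use
    the regular M79 computations', represented here as None. Entries
    above 3500 ft/sec are rejected, since the penetration equations
    are unreliable above that speed."""
    v = int(raw)                   # whole numbers only; fractions dropped
    if v <= 0:
        return None                # skip: use the M79 value everywhere
    if v > 3500:
        raise ValueError("maximum allowed input is 3500 feet/second")
    return v
```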
The user now enters the projectile striking velocity that is used to see if the projectile penetrates or not and what is estimated as the remaining velocity and exit angle of the projectile when it goes through and comes out the plate back, if it penetrates. The program does not estimate ricochet angles for non-penetrating projectiles. Whole positive numbers only (not zero or less). Maximum value is 3500 feet/second, just like the user-defined minimum penetration velocity given above.
This is the end of the user inputs. The next keyboard entry displays the results.
Included is a large set of graphs generated by a set of Microsoft EXCEL Spreadsheets of typical velocity adjustment values generated by the program for a 3" 15-lb AP projectile with Cap #1 of 12% weight and a standard sheet-steel windscreen of 3% weight (no holes, break-away threads, or explosives involved). The windscreen and cap adjustment values are shown separately and added together as they would be in a real projectile.
Also included is a graph of a set of STS Plate Thickness versus Navy Ballistic Limit (NBL) curves for 0, 15, 30, 45, 60, and 75 degrees for the bare-nose 3" 15-lb M79 AP Shot projectile and this hypothetical capped AP projectile of the same diameter and weight, both with and without the Intact Cap Effect added to the Shattered Cap Effect and Cap Edge Effect. This hypothetical projectile does not have a deeply notched upper edge to its AP cap; it has a hard, though not especially hard, cap with the standard WWII US Navy cap shape similar to the 6" Mk 35 MOD 5 (Cap #1 here), but thinner and lighter (more like the typical AP projectile used by most nations). Only the program defaults were used for these calculations.
Cap #2 would give similar plots with only the Shattered Cap Effect results varying slightly from the ones given.
Included are plots of the adjustments the program generates for a hypothetical SAP projectile with a 5% Hood and a 5% windscreen. No thickness-versus-velocity plot was made. The sum of the Windscreen and Hood Effects is much smaller than that for the AP cap and windscreen given above.
In these graphs, note that each effect has a minimum, usually near 0.5-caliber plate thickness but ranging down to about 0.4-caliber plate thickness for a small cluster of curves. These effects make the nose act as if it were blunter, so it more closely resembles a flat-nosed projectile, and flat-nosed projectiles have their minimum energy per plate thickness to penetrate at 0.55-caliber plate thickness. The reason is twofold. First, dishing is at its minimum (only a very narrow region around the impact site deforms and the dish formed is very shallow), so the wider dish formed by a flat-nosed projectile, compared to a more streamlined pointed projectile, has its smallest possible negative effect. Second, the petals that a pointed projectile forces backward and sideways out the plate back into a jagged ring around the exit hole are at their thickest (the full 0.5-caliber plate thickness), so the shearing action of a flat nose, which cuts a plug in the very narrow highly-stressed zone around the edge of the hole and eliminates these massive petals completely, has its greatest effect in reducing the energy needed to penetrate compared to a pointed projectile. Thus, the closer to flat the nose acts, the smaller the increase in penetration energy caused by most of these nose-covering effects (only the Cap Edge Effect at higher obliquity actually assists penetration; all the other effects hurt it to some degree or other).
Those few Hood and windscreen curves between 35° and 50° that are biased toward a minimum at a lower plate thickness (around 0.4 caliber) result from the Hood and windscreen material being trapped between the projectile nose and the plate to a larger degree under those conditions. The trapped material adds a small amount to the effective plate thickness, so the plate itself can be thinner and still act like it is close to 0.5 caliber total, the minimum for the other curves where this trapping is less evident.
NOTE: These graphs are for imaginary projectiles and were made using an early version of the program prior to adding all of the "bells and whistles" of the current version. Some of the values will now be slightly different in the latest program version. The results do show the basic output parameters of the program.
It is still based on the WWII 15-pound US Army 76mm M79 AP Shot projectile (tangent ogive nose shape with a Radius of Ogive or British "Caliber Radius Head" (CRH) of 1.667 caliber (5" for the 76mm shot)), but some additional capabilities have now been added to that basic program. This is by no means the final homogeneous armor penetration program, because it does not handle other nose shapes internally. Neither does it handle projectile body or nose damage.
Since the portion of the projectile that is penetrating the plate is no longer usually the entire projectile if any nose coverings are included (except against thinner plates), the approximation for the remaining velocity has been changed as follows:
- The M79 program calculated the energy that was left after penetration by subtracting the amount lost to penetrating the armor from the original energy of the projectile on impact. At low obliquity, the amount lost is the entire impact energy when the projectile stops inside the plate at just below the US Navy Ballistic Limit (NBL) - the minimum Striking Velocity where complete penetration entirely through the plate (all of the projectile body if intact, or 80% by weight of the body if broken up) just barely occurs. At higher obliquity, however, the projectile NEVER stops; it merely goes from some velocity of ricochet away from the face to some somewhat lower Remaining Velocity (lower due to the energy lost in the plate) at some Exit Angle through the plate (it can be thought of as "ricocheting" off the plate back into the target). Therefore, I came up with the following approximation:
I used the "True" NBL for up to 45 degrees obliquity (OB, the obliquity angle of the direction of motion of the shell compared to the imaginary "normal" (right-angles) line sticking out of the plate face at the center of the impact, measured with zero being normal), where "True" means the nose-first NBL as calculated by the program (all impacts below 65 degrees obliquity are nose-first), as the zero-remaining-energy value [Penetration Kinetic Energy (KE) = 0.5*W*(NBL)^2]. I assumed that this was a constant no matter what the Striking Velocity was (this is good enough as long as the method the projectile uses to penetrate does not change and as long as no armor plug is ejected that soaks up additional energy, which does not happen with the armor and pointed projectile nose used here).
I created a virtual "Energy NBL" for impacts over 45 degrees as the amount of velocity lost to the plate during nose-first penetrations. I first estimated the amount of energy lost to deforming the armor to let the projectile through, using a set of rules in the program which I hope are not too much in error, then changed this back to the Energy NBL value by reversing the KE formula, since the projectile weight is unchanged during the penetration. Note that base-first penetrations, when they occurred in the program (only limited obliquity and plate thickness combinations allowed this), were rather undefined. The projectile "surfed" along the thin plates that allowed this, nose-up with the projectile base dragging through the slot being made, until the projectile either flipped out on the face and skidded along outside the armor until it stopped or the plate ended, or widened the end of the slot so that it tumbled backward base-first through the hole at a random Remaining Velocity (VR) and Exit Angle (EX). Such penetrations had no well-defined VR (other than it being very low) or EX (anything from zero - right angles to the plate face measured out the back - to almost 90 degrees, depending on how the plate was bent or torn at the instant just before the projectile fell through).
I then subtracted whichever energy type applied (the True NBL energy at or below 45 degrees, or the virtual Energy NBL above 45 degrees), divided by (W/2) to eliminate the common weight in all of the terms, from the square of the Striking Velocity to determine how much energy was left, and then took the square root of that to determine the VR of the projectile as if it were not deflected as it went through the plate (deflection effects are covered next). This is simple conservation of energy. As mentioned, the Energy NBL was my best rough estimate of the split between the energy lost in penetrating and the energy retained in an oblique impact where the projectile is never stopped even at the NBL, merely going from a ricochet velocity to a somewhat smaller remaining velocity as the NBL is passed.
A deceleration due to the projectile being deflected sideways from its original path at an oblique angle was added, using the COSINE of the Deflection Angle (OB - EX), where all angles are measured from the imaginary "normal" line sticking out at right angles to the plate face and also sticking out through to the plate back and all angles are in the same plane (as a first approximation) to further slow the projectile down. This is also a rough estimate of further loss in energy due to the projectile twisting through the plate going nose-up to nose-down as the forces shift during penetration. It gives a number that can be used for further penetration and fuze delay computations with some reasonable chance of being near the middle of the range of possible values that will change from impact to impact in real life (the world is complicated!!). Note that a deflection will mean that the projectile nose was wrenched from its original direction of motion to a new one in the plate thickness, so a spinning projectile will wobble and its nose corkscrew (nutate) along its new path until it damps out, which is unlikely to occur in the time it is inside the target, even if the fuze does not blow the projectile up first. Even a small deflection of 10 degrees or so has caused major wobble and nose yaw (tilt sideways from the direction of motion of the projectile as it moves forward) in tests, though with heavy, relatively low-velocity (compared to Army anti-tank projectiles at close range) naval AP projectiles this yaw effect usually does not cause major effects on later motion through the target until it exceeds 30 degrees or so.
Since usually only a portion of a projectile with a windscreen, Hood, and/or AP cap penetrates, the penetrator retains part of the energy still left after penetration is done, with the portion that was crushed and torn off the projectile and did not penetrate retaining the rest. How much should be apportioned to each? This is also a value that will change from impact to impact, but as a rough center-of-the-road estimate, I simply assume that the energy lost to all of the projectile mass is the same, so that the penetrating portion of the projectile retains the fraction of the undeflected projectile energy calculated by the ratio of the penetrator weight divided by the total weight prior to impact, using the calculation in (1), above to give the original energy total. The deflection loss in (2), above, is then applied to the penetrator's portion, giving the final remaining velocity of the penetrating portion of the projectile. Tah-dah!
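The steps above (energy conservation, deflection, and weight apportionment) can be sketched as follows. Note that the equal-energy-loss-per-pound assumption means every fragment shares the same undeflected velocity, so the apportionment changes the penetrator's share of the remaining energy rather than its speed. This is an illustrative reading of the text with hypothetical names, not the program's actual code.

```python
import math

def penetrator_exit_state(striking_v, nbl, total_wt, penetrator_wt,
                          obliquity_deg, exit_deg):
    """Sketch of the remaining-velocity approximation; velocities in
    ft/sec, weights in lb. Returns (final velocity of the penetrating
    portion, fraction of the post-penetration energy it keeps)."""
    if striking_v <= nbl:
        return 0.0, 0.0          # stopped in, or ricocheted off, the plate
    # (1) conservation of energy; the common W/2 cancels from every term
    vr = math.sqrt(striking_v ** 2 - nbl ** 2)
    # (3) equal energy loss per pound: the penetrator keeps the weight
    # fraction of the remaining energy, leaving its velocity at vr
    energy_fraction = penetrator_wt / total_wt
    # (2) deflection loss, applied to the penetrator's portion
    v_final = vr * math.cos(math.radians(obliquity_deg - exit_deg))
    return v_final, energy_fraction
```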
If a plug was punched from the plate - not calculated in the current program but of importance when this is added due to nose shape effects later - then a further split between the energy absorbed by the plug and that kept by the penetrator would have to be included, as I did in my FACEHARD program. The reason I have not included the plugging process here is that I need more information on the effects of nose shape on the size and speed of the plug punched out. A flat-nosed projectile always punches out a plug a full caliber wide sideways and a full caliber times COS(EX) long lengthwise (both measured parallel to the plate surface) and a full plate thickness deep, at roughly right angles to the plate face no matter what OB or EX is (this is possibly different in a tapering plate with a back not parallel to the face, but I have no information on such a difference and am thus ignoring it). Since we are always talking about steel here, the weight of this plug is 0.283 lb/cubic-inch times the volume of the plug (circular or elliptical surface area times plate thickness). If the projectile shatters, the hole made, at a considerably higher striking velocity than the intact-projectile NBL, is wider and more jagged, and I estimate that such a hole requires roughly twice the Striking Velocity (that is, double the NBL value) for a hardened steel AP or SAP projectile (probably more for a soft wrought iron or steel projectile or for a fragile cast iron projectile). This increase due to shatter is much higher than that for a face-hardened plate, where the striking velocity only goes up by 30% in most cases, though this ranges from 20-40% under some circumstances (see my FACEHARD program).
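The plug-weight rule in the paragraph above can be written out directly. This is a sketch under the text's stated geometry (elliptical face of caliber width by caliber-times-COS(EX) length, full plate depth), with hypothetical names:

```python
import math

STEEL_DENSITY = 0.283  # lb per cubic inch, as given above

def flat_nose_plug_weight(caliber_in, plate_thickness_in, exit_deg):
    """Weight (lb) of the plug punched by a 100% flat nose: an elliptical
    face one caliber wide by caliber*COS(EX) long, times full plate depth."""
    length = caliber_in * math.cos(math.radians(exit_deg))
    face_area = math.pi / 4.0 * caliber_in * length   # ellipse area
    return STEEL_DENSITY * face_area * plate_thickness_in
```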
For a truncated-ogive or -cone flat-nose projectile, where the flat portion is only a fraction of the total projectile width (percentage flat is under 100) and which has a sharp corner around the edge, the width and length of the plug is that of the flat area, just like a full flat-nose shape, with the material to all sides of the central plug being pushed outward just like if the hole were made by a pointed projectile. The thickness of such a plug decreases faster than the width does, down to a minimum. For a tapered flat nose of 0.5-caliber (50%) flat width or less, the plug is ALWAYS ONLY HALF of the plug sideways width (0.125-caliber thick for a 0.25-caliber-wide (25%) flat face, for example), linearly dropping from full plate thickness for a 1-caliber-flat-nose (100% face) to this reduced thickness at half-caliber:
This linear drop is used only for a FLAT PORTION PERCENTAGE > 50%; always use half of the flat portion percentage for noses with flat-nose diameters under 50%. Note that the outer ring of material has a very large volume even if it is narrow, while the central plug may have only a rather small volume even if it has an appreciable width. For example, if the central plug is formed in a 1-caliber-thick plate by a right-angles impact of a tapered flat nose of 67% (0.67-caliber) width, the volume of the plug is
while the remaining edge-ring of material in the hole itself that is deformed and opened to make a caliber-wide final hole (ignoring the added material surrounding the hole that is deformed, too) is
which is MUCH larger than the plug, so that the plug velocity has rather little effect on the NBL and VR of the projectile after complete penetration, though accelerating the plug does cause some effect, of course. This seemingly small change in flat portion from 100% to 67% here also has a major effect on the penetration equations at all obliquities; even an 85% face has major effects, as the differences between my Flat-Nose Penetration Computer Program and my Tapered Flat-Nose Penetration Computer Program show.
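The plug-thickness rule stated above can be sketched as follows. The at-or-below-50% case is explicit in the text; exactly how the "linear drop" between a 100% face and a 50% face is interpolated is my assumption, not the program's exact table:

```python
def tapered_flat_plug_thickness(flat_fraction, plate_thickness_cal):
    """Plug thickness (calibers) for a tapered flat nose.
    flat_fraction: flat-face width as a fraction of the projectile caliber.
    At or below 50% the text is explicit: thickness is half the flat width.
    Between 50% and 100% the 'linear drop' is read here as interpolating
    from 0.25 caliber (50% face) up to the full plate thickness (100%
    face) - that interpolation is an assumption, not the exact table."""
    if flat_fraction <= 0.5:
        return flat_fraction / 2.0
    frac = (flat_fraction - 0.5) / 0.5
    return 0.25 + frac * (plate_thickness_cal - 0.25)
```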
To further complicate the plug-creation topic, if the projectile has a curving blunt-tipped nose, even if it has a very blunt point, then above some Striking Velocity the plate material near the center begins to find it easier to shear out as a small plug than bend and tear apart in the sideways direction, with the width of this central region getting wider and wider as the Striking Velocity increases from that point on. Thus we get a small plug that gets bigger and bigger with increasing Striking Velocity above the NBL and this has more and more effect on the NBL and the VR as the projectile loses energy to the accelerated plug on top of that needed to tear open the hole. What information is needed here is the size of the plug and the Plug Velocity under all possible conditions, since this will immediately give an energy loss to the projectile which otherwise would go into the plate to make the hole bigger and/or remain in the projectile after penetration. Thus, the plug reduces the VR of the projectile and increases somewhat the velocity needed to penetrate, though the plugging/shearing out around the edges of the plug may require less energy than bending the entire volume of armor in the projectile's path, so the net effect may be to reduce the NBL, which gives more energy for VR, countering the previous discussion's logic. I have to do much more work on this, unfortunately. Things are getting even more complicated!!
Plug punching has a further major effect on penetration due to its effect on the ability of the projectile to remain unshattered during penetration. 100% flat-nose steel AP projectiles - even especially-made, optimally-hardened, extra-heavy projectiles that required somewhat lower striking velocities than the typical WWII 3-3.5-caliber-length AP projectiles - have in every test I have seen ALWAYS shattered completely into pieces against armor or even high-strength construction steel above one-caliber thickness at right angles (where the shatter-causing thickness is greatest), with shatter or major deformation in softer projectiles ("mushrooming" of the nose so that it is wider than it was prior to the impact) setting in at an even lower plate thickness in many cases. I use 0.8 caliber as the thickest plate of STS/Class "B" armor that a typical AP projectile with a 100% flat face can penetrate unshattered at zero degrees OB (in reality with a large "Zone of Mixed Results" below this thickness), though it might be possible to make an optimum 100%-flat-nose AP projectile able to remain unshattered against 1-caliber STS plate, as with the tests against the construction steel. I assume that doubling the striking velocity will allow pieces of the shattered projectile to punch through the plate, but between the computed unshattered NBL (extrapolated from thinner-plate tests) and this estimated double-the-NBL shatter value, NO PART OF THE PROJECTILE PENETRATES, and in many cases not even a full-thickness hole is made in the plate! Why is there this "lid" on the ability of a 100%-flat nose shape to remain unshattered at 1-caliber plate thickness at and above the extrapolated NBL from thinner-plate impacts against the same material with the same projectile? I have a rough theory, which may not be complete.
As with billiards, when the cue ball hits a numbered ball at right angles, both balls being identical in size and weight, the cue ball stops cold and, except for some energy lost to making the "click" sound and in heat inside the balls, the numbered ball INSTANTLY (to all intents and purposes) moves forward with the cue ball's full speed. When a 100%-flat-nosed steel projectile hits a flat iron or steel plate at right angles, the same thing happens, with the energy needed to shear out the full-nose-width, full-plate-depth plug of armor around its edges added to the sound and heat losses of the cue-ball impact, but otherwise the logic is identical. The thickness of the projectile nose equal to the plate thickness stops cold and the entire-plate-thickness plug moves forward, with only the energy losses just mentioned, otherwise at near the projectile's full impact velocity out the plate back. What happens to the stopped portion of the projectile nose under this situation? If the plate is thinner than a full projectile caliber, then the stopped nose is immediately re-accelerated by the rest of the projectile body still moving forward behind it to some somewhat lower speed, as the energy in the unstopped projectile portion is re-distributed into the stopped nose portion to give the projectile's final exit velocity, while the plug finishes shearing out around its edge and accelerates to full exit speed during this same time, so that the stopped nose is only subject to its inertia and the force from the back, never touching the plug again. The nose material is able to remain intact through such an abrupt re-acceleration.
On the other hand, if the plate is over 1 caliber thick, when the same thing occurs to the over-1-caliber-thick stopped nose portion of the projectile, the time it takes for the shearing action around the edge to move the full plate depth, allowing the plug finally to move forward, is so long that the re-acceleration of the nose from the back begins while the front face of the nose is still pressed up against the plate plug, which has not yet started to move. Also, the compression waves from the sides of the nose due to the extremely abrupt impact shock stopping the nose portion now have time to reach across the nose from edge to edge before the nose starts moving (for thinner plates the plug is punched out prior to the shockwave edge effects reaching across the full-caliber-width of the projectile nose). We now have a stopped nose portion "saturated" with forces from the back trying to speed it up again, from the front resisting this speed up since the plug has not yet torn free of the plate, and from the edges of the impact area due to shockwaves that now can reach every part of the face of the nose from all sides. These shockwaves ricocheting around in the nose most particularly have a bad effect on the opposite edge of the nose, since when a shockwave reflects, the edge experiences both an acceleration outward from inside due to the initial shockwave impact AND a second almost-instantly-after acceleration outward as the shockwave reflects and moves back inward into the nose (action-reaction). This "double-whammy" where shockwaves reflect is why objects subject to them break at such points and not at other places. In this thick-plate case, the shockwaves finally cause their nose-edge damage PRIOR TO the plug being finally punched out and PRIOR TO the re-acceleration of the projectile nose portion from the rear, so the weakened nose now shatters under the combined crushing pressure from the back against the still-not-moving plug in front. 
In fact, it might be IMPOSSIBLE to make a projectile with a full-width flat nose remain unshattered (or, if soft, "un-mushroomed") against a plate of similar material (here steel) of over caliber thickness when hitting that plate at right angles at a striking velocity that should completely punch through it when extrapolating from tests of thinner plates where the projectile nose did not shatter. It IS impossible to get a projectile that punches a plug of weight equal to or greater than the full projectile weight to ever penetrate, even if hitting at the speed of light, due to this effect.
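The billiards picture behind this claim can be illustrated with the textbook one-dimensional elastic collision (an idealization that ignores the shearing, sound, and heat losses discussed above): when the plug mass equals the projectile mass the projectile stops cold, and when the plug is heavier the projectile rebounds (negative final velocity) no matter how fast it strikes.

```python
def elastic_collision(m_proj, v_strike, m_plug):
    """One-dimensional elastic collision with the plug initially at rest.
    Returns (projectile final velocity, plug final velocity)."""
    v_proj = (m_proj - m_plug) / (m_proj + m_plug) * v_strike
    v_plug = 2.0 * m_proj / (m_proj + m_plug) * v_strike
    return v_proj, v_plug
```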
To get things even more complicated, the adjustments caused by the windscreen, Hood, and AP cap are to some baseline NBL value. What NBL value is being changed? You can't change something unless you have knowledge of what the thing was BEFORE it was changed, can you? This turns out to have several answers, depending on what has happened:
The NBL always uses the full weight of the projectile in its baseline, no matter what that NBL calculated from it might be. The Windscreen, Hood, and AP cap subtract weight from the projectile body - the body is the minimum "penetrator" that goes through the plate, though sometimes one or more of these nose coverings also penetrates, so their weight has to be added to the penetrator - but this is ignored in computing the NBL as long as they are attached to the projectile on impact, no matter what happens to them afterwards.
The program currently only computes the US Army 76mm (3") M79 AP Shot projectile nose shape as the baseline for the NBL (very simple tangent ogive (arc of circle) pointed nose shape with a Radius of Ogive or British "Caliber Radius Head" (CRH) of 1.667 caliber (5") and a center at the same level as the joint between the cylindrical side and curved nose, so that there is no crease or shoulder at the joint) regardless of the actual nose shape. If you want, as mentioned below, you can substitute another NBL from some other source as the baseline to be modified by the program due to the windscreen, Hood, and AP cap effects noted. I hope to add other nose shapes in the future, but this is complicated. If you input your own NBL, the Base-Through Penetration logic is bypassed, as I have no data on such a case at the moment.
If the plate hit is so thin that even the windscreen, if used, is undamaged, then the projectile has no windscreen, Hood, or AP cap effects, just a long pointed nose using the M79 NBL computation or the user NBL entry, as desired.
If the windscreen is crushed, the Windscreen Effects portion of the program applies. The user can modify the default values if he wishes, as described in the on-line program instructions at this point; these defaults are based on a US Navy 6" Mk 27 MOD 7 Special Common projectile with a 5.1% (of total weight) long, pointed, simple one-piece mild-steel sheet-metal windscreen (longer and heavier than most). The default values used by Dr. Hershey apply when no special modifications to the windscreen were made. For example, many holes were cut into the windscreens of some US Navy WWII AP projectiles to let water in and out so that an internal dye bag could color the splash of a near-miss, allowing ships to identify their gunfire when another ship was firing at the same or a nearby target; these holes seem to halve the Windscreen Effect compared to a similar windscreen without them. Changes of this sort, whether the holes just mentioned or some other modification, can be handled by the user-entered multipliers to the default effect created by the program. The Windscreen Effect is otherwise adjusted directly by the change in the percentage of the total projectile weight that the windscreen takes up, with the 5.1% windscreen giving the exact table output; that is, the tabled effect is multiplied by the actual windscreen percentage divided by 5.1%.
Nose shape under the windscreen (actual projectile nose or Hood or AP cap) is not considered. The Windscreen Effect: (1) if an AP cap is used, it changes the NBL OF THE AP CAP FACE SHAPE, IF THE AP CAP REMAINS UNSHATTERED or (2) if an AP cap is used, it changes the NBL of the BARE NOSE OF THE PROJECTILE, IF THE AP CAP SHATTERS (that is, shatter of the cap exposes the bare nose of the projectile during most of the penetration process and it is the bare nose that is being altered by these effects) or (3) if a Hood is used, it changes the NBL of the BARE NOSE OF THE PROJECTILE (that is, the Hood Effect completely takes the existence of a Hood into account and the windscreen can ignore it).
The Hood is treated just like the windscreen in the program, being based on the same Special Common projectile test data with its 5.2% Hood. There is no user override entry, though; the table is used as is, merely with the projectile's % Hood weight divided by 5.2% being the only modifier. The Hood is a thin contoured mild-steel nose covering soldered skin-tight to the projectile nose all the way to the tip, with the only thick part being a ring around the base of the nose just above the point where the nose and cylindrical side join, which is cut with threads on its outer edge for matching the thinner threaded ring at the base of the windscreen to tightly hold the windscreen in place during gun firing and flight to the target. This unusual shape is more-or-less standard on all US or foreign versions of the Hood (of whatever name was used in foreign navies), being thicker or thinner and with a thicker or thinner base ring, depending on the exact design used. The Hood Effect in this program includes both thin plate impacts where the Hood is not damaged by the impact and penetrates through the plate along with the projectile body under it, though it might be knocked off like an AP cap as it exits the far side of the plate, and thicker plates where the Hood is scraped off the nose by the plate, peeling off the thin center region and splitting the threaded base ring into several pieces as it is squeezed between the projectile lower nose and the plate so that the Hood does not penetrate. Thus, the Hood Effect applies no matter what happens to the Hood during penetration, ignoring all nose shape adjustments. Hoods are still used with modern base-fuzed uncapped Common projectiles (Semi-Armor-Piercing (SAP) in the US Army) designed for attack of light armor. The Hood always modifies the NBL of the BARE NOSE OF THE PROJECTILE, with or without a windscreen.
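The weight scalings described above for the windscreen (5.1% reference, with user multipliers) and the Hood (5.2% reference, no override) can be sketched as below; `table_effect` stands in for the tabled effect value, and the function names are mine:

```python
# 'table_effect' stands for the tabled effect value from the 6" Mk 27
# MOD 7 test data; 5.1 and 5.2 are the reference windscreen and Hood
# weight percentages given above. Function names are hypothetical.

def scaled_windscreen_effect(table_effect, windscreen_pct, user_mult=1.0):
    """Windscreen Effect scaled by weight fraction, with a user multiplier
    (e.g. to model dye-bag holes that roughly halve the effect)."""
    return table_effect * (windscreen_pct / 5.1) * user_mult

def scaled_hood_effect(table_effect, hood_pct):
    """Hood Effect scaled by weight fraction; no user override exists."""
    return table_effect * (hood_pct / 5.2)
```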
The AP cap is different. It has three distinct effects in this program: The Shattered Cap Effect, the Intact Cap Effect, and the Cap Edge Effect.
The Shattered Cap Effect is a modification of the NBL of the NOSE UNDER THE CAP. The shape of the nose under the AP cap changes the effects: a set of German tests with identical caps (same weight, face shape, and hardness contour) but bare nose shapes going from hemispheres to long points (tangent ogive noses up to 2 CRH) showed that the blunter the nose, the lower the NBL at 45 and 60 degrees obliquity against plates over 0.5 caliber thick where the cap shattered and did not penetrate (there were high-speed photos of the cap being pulverized during impact). The shatter exposes the bare nose during most of the penetration process, which can now change the penetration mechanics. (If there is a windscreen involved, then its Windscreen Effect changes the bare-nose NBL.) The Shattered Cap Effect is different for different caps and is a major effect at low obliquity, ending circa 60-65 degrees obliquity with the caps I have data on. The weight of the cap has a major effect, with the cap design (hardness, notched edge, and so forth) determining how much a change in weight from the standard weight of the two caps used (#1 or #2) changes the effect.
The Intact Cap Effect is a modification of the NBL of the AP CAP FACE SHAPE, where the cap attachment to the nose is broken (decapping) and the cap is held on the nose only by pressure from the plate, but in this case the nose is so blunt and round that the cap can slide sideways a small distance and act like a bent nose over a range of obliquity angles, increasing the NBL. The bent nose is that of the cap face, since the actual projectile nose is always hidden by the cap. If the nose is not the right shape to allow this effect, the cap stays aligned with the nose as though it were still soldered on (the cap falls off after the projectile exits the plate back) and the projectile simply uses the cap face as its nose shape. This effect is non-zero in the range 40-75 degrees obliquity, when the nose allows it to occur. It is not a large effect, even then. The weight of the cap varies with its thickness, so cap weight has an effect here directly proportional to its weight relative to the standard weight of the cap (#1 or #2) selected.
The Cap Edge Effect is due to the corner of the cap face where it joins the tapering side of the cap - cylindrical (oldest caps), conical, or ogival in almost every case; if there is a windscreen, the cap side usually forms the base of the windscreen, acting just like a very thick Hood and continuing the windscreen shape down to the lower nose (though there were a few exceptions where the windscreen covered the entire nose, including the original small-size AP cap, when modifying an old projectile originally without a windscreen to improve range). At certain obliquities against certain plate thicknesses this corner can dig into the plate and cut a crescent-shaped notch in the plate face that impedes ricochet, reducing the Striking Velocity needed to penetrate. It only occurs when the cap is hard enough compared to the plate that the corner is not flattened out on impact - usually only hard caps allow this effect, though some of the "tough" caps may be hard enough to cut the notch in such weak materials as wrought iron (I have no data, however). In this program, the user can substitute his own value for this effect and change the plate thickness at which it occurs, but the default is to change the NBL by -12% in all cases where it applies. The effect only occurs at obliquities of 50 degrees or more and at thicknesses of a minimum value or greater, this minimum thickness dropping rapidly with increasing impact obliquity up to the maximum obliquity of 80 degrees. The reason for the thickness requirement is that to dig into the plate face, the corner has to move forward fast enough parallel to the surface to scoop up enough armor material in front of it to overcome the pressure upward from the plate trying to make the nose glance off and ricochet.
Without this sharp corner - here approximately a 135-degree internal angle for the US Navy standard cap face with a 50-degree face angle (100-degree arc inside the cap symmetrical with the cap centerline) and a somewhat tapering upper side of the cap for streamlining with the windscreen - the projectile has to hit at a somewhat higher velocity to dig into the plate face deep enough to cause the same resistance to ricochet that the corner gives. The lower the obliquity, down to the 50-degree minimum for this effect, the thicker the plate must be to be rigid enough for the cap edge to cause this scooping when its speed parallel to the surface is a lower fraction of its total speed. The striking velocity needed to penetrate a thicker plate is higher, and so is its parallel-to-the-face component, so the parallel component becomes large enough to give the effect once the plate gets above a minimum thickness requiring a high-enough striking velocity. The greater the obliquity, the greater the parallel-to-the-plate-face component is and the lower the total striking velocity need be, with a thinner plate's NBL being good enough. This effect disappears below 50 degrees since, at near 45 degrees and less, the striking velocity component into the plate at right angles is roughly equal to or, for lower obliquity, greater than the parallel-to-the-face component, and the cap edge notch would have no effect on penetration (it becomes a general nose-shape thing). The user can input other values than the defaults used, if he has some information from other sources on this. Cap weight has no effect here.
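The default behavior just described can be sketched as a small helper. Only the -12% default change, the 50-80 degree obliquity window, and the minimum-thickness requirement come from the text; the function itself (names, structure) is my own illustration, not the actual HCWCALC.EXE code:

```python
# Illustrative sketch of the Cap Edge Effect NBL adjustment. The -12% default,
# the 50-80 degree window, and the minimum-thickness test are from the text;
# everything else here is a hypothetical arrangement for illustration only.
def cap_edge_nbl(base_nbl, obliquity_deg, plate_thickness_cal,
                 min_thickness_cal, effect_fraction=-0.12):
    """Return the NBL after applying the Cap Edge Effect, if it applies.

    base_nbl            -- NBL computed for the same nose WITHOUT the corner
    obliquity_deg       -- impact obliquity (0 = right angles to the plate)
    plate_thickness_cal -- plate thickness in calibers
    min_thickness_cal   -- minimum thickness for this obliquity (per the tables)
    effect_fraction     -- default -12% for the US 50-degree cap face
    """
    if 50.0 <= obliquity_deg <= 80.0 and plate_thickness_cal >= min_thickness_cal:
        return base_nbl * (1.0 + effect_fraction)
    return base_nbl
```

For example, with a 1000 ft/sec baseline NBL at 55 degrees against a plate above the minimum thickness, the adjusted NBL would be 880 ft/sec; below 50 degrees the baseline is returned unchanged.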
Normally, if the AP cap remained intact during the penetration (either aligned with the projectile nose or, with the Intact Cap Effect, tilted somewhat), you would simply use the cap's contour as the projectile nose during penetration. This is of course the thing to do. However, how do you do this in a systematic manner? Projectile noses can be of many shapes from flat to long points, usually either curved (ogival) or straight (conical) with a point or small flattened tip or rounded into something similar to an ellipse, which are difficult enough to try to sort out as to their effects on penetration (even assuming the projectile is never damaged in any way by the impact, which is not true in many cases), but most such noses only have a corner or crease at the base where the nose merges with the cylindrical sides of the projectile and a point, if any, at the tip. AP caps, on the other hand, were not designed for streamlining (except as an afterthought, and even then the streamlining used, if any, was not very good), but to protect the projectile nose on impact with face-hardened armor (and homogeneous armor when used with small, high-velocity projectiles at close range against tank armor, but we are talking about naval armor here hit at comparatively long range). All sorts of theories were put forth on how the cap worked, most of which were completely wrong, but many AP caps were made to conform to the latest "fad" theory of the moment. Interestingly, many of the shapes of soft and tough AP caps worked regardless of their original design theory simply because enough cap material was there and was tightly squeezed against all of the projectile's upper nose on impact, forming a doughnut-shaped ring completely surrounding the nose and touching it at all points in this ring, no matter what the original shape was, making them all act like the same basic design as far as protection of the nose went.
Hard caps were less varied in their shapes, but still had some very odd shapes totally at odds with any bare-nosed projectile nose shape. Many had a cap face with a sharp edge corner, its inner angle ranging from nearly 90 degrees (British Firth and German Krupp post-WWI "Knob-And-Ring" AP cap shape - the shape of the top of the flying saucer in the 1951 movie THE DAY THE EARTH STOOD STILL), to about 100 degrees for the corner of the tapered 0.69-caliber-wide flat-face AP cap of all WWII Japanese Type 88/91/1 caps hitting at and above 45 degrees, where their break-away Cap Head was flung sideways along with the destruction of the windscreen (weakened windscreen threads held on both), through about 133-135 degrees for the standard WWII US AP projectile AP cap with its 50-degree (half angle from the centerline) conical cap face (default for this program), and finally to about 150 degrees (the corner of the French contoured AP caps which covered the entire nose, only being thick as a tall hardened conical face just over the upper end of the projectile nose). The smaller the inner corner angle, the more effective the corner was at cutting the notch into the plate on impact. Some AP caps had contoured face shapes with no sharp corner at all, such as the British Hadfield Company AP caps used during WWII on large naval APC projectiles at all obliquities, or the Japanese 46cm (18.1") Type 91/1 AP projectiles used in WWII by the YAMATO Class battleships when hitting at under 45 degrees, so that the thick dome-shaped Cap Head was still attached during the entire early penetration process on the plate face. These last would have no Cap Edge Effect under any circumstances where the edge remained round at obliquities of 50 degrees and up (only the Japanese shell nose changed shape at over 45 degrees, and it was always a 69%-flat-faced tapered cone when the Cap Edge Effect occurred). This is gone into in more detail below.
The Cap Edge Effect is separated from the shape of the cap in general because Dr. Allen Hershey of the US Naval Proving Ground, Dahlgren, Virginia, who headed their Ballistics Computation Group during and after WWII until armor development in the US Navy ended in 1955, came up with a method of handling various nose shapes that worked rather well, on the average, but it specifically excluded nose shapes with corners anywhere but at the very base of the nose where the nose and cylindrical sides meet (that means, for example, it could handle a full flat nose, but not a tapered flat nose where the flat area with the sharp corner was raised above the base of the nose and is narrower than the projectile diameter - my Tapered Flat Nose, for example, was excluded). Unfortunately, as noted in d), above, most AP caps had such corners of varying sharpness between the tip and the base, again because ballistic streamlining was not the most important design consideration with AP caps. What Dr. Hershey did was to create the Cap Edge Effect in several steps:
First, he ignored the sharp corner and came up with the best-fit smooth curve without a corner (other than at the base, if needed) that fit the cap shape (a kind of least-squares fit to the shape as a smooth curve, as if it was from a set of data points running along the surface). He called this the Ogive Effect. He then used this shape in his ballistic system to come up with the approximate NBL for this shape as an added or subtracted value from the M79 AP projectile nose shape, which was completely known. What he did was turn all nose shapes, including the M79 nose shape, into the "Equivalent Elliptical Nose" (EEN, a nose made of an ideal perfect round-tipped ellipse with the same ballistic properties as the real nose) using a formula that included both how long the nose height was (NH, the distance along the centerline from the joint with the upper edge of the cylindrical sides to the tip in the center of the nose) and how "fat" it was compared to a cylindrical nose (that is, compared to a full flat nose that merely extended the projectile side up to the tip of the projectile nose), with "fat" meaning its fraction of this maximum volume. A cylindrical nose of the same Nose Height would have a volume of NH*PI*(D/2)^2, so all tapering noses have less volume (thinner). His EEN Height was twice the effective Nose Height of the equivalent ellipse, being the ratio of half of the long axis of the ellipse, "b" (he used "b*" or "b-star" to indicate this was not the true nose shape but the calculated EEN, which I changed to "bx" to prevent confusion with the * used for multiplication in computer programs), to half of the short axis, "a", where "a" is obviously always half-caliber in length (0.5-caliber), whereas the Nose Height value used the full projectile diameter (1 caliber).
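As an illustration of the "fatness" measure just described (the nose volume as a fraction of the cylinder NH*PI*(D/2)^2), here is a minimal numerical sketch. The profile-function idea, the midpoint-rule integration, and the hemisphere example are my own assumptions for illustration, not Dr. Hershey's actual method:

```python
import math

# "Fatness" of a nose: its volume as a fraction of the full cylinder
# NH * PI * (D/2)^2, with all lengths in calibers (D = 1, so a = 0.5).
# profile(h) returns the nose radius at height h above the nose base.
def fatness(profile, nose_height, steps=10000):
    dh = nose_height / steps
    volume = 0.0
    for i in range(steps):
        h = (i + 0.5) * dh          # midpoint rule
        r = profile(h)
        volume += math.pi * r * r * dh
    return volume / (nose_height * math.pi * 0.5 ** 2)

# Example: a hemispherical nose of radius 0.5 caliber. Its volume is exactly
# 2/3 of the enclosing cylinder's, so fatness() should return about 0.667.
hemisphere = lambda h: math.sqrt(max(0.0, 0.25 - h * h))
```

A full flat (cylindrical) nose, profile(h) = 0.5 everywhere, gives a fatness of 1.0, the maximum, while all tapering noses come out lower.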
Using his formula, he computed the EEN for the M79 AP Shot projectile's tangent ogive 1.667 CRH nose (actual NH = 1.19 calibers; EEN Height = 3.68 calibers) and then used his formula for lengthening or shortening the actual nose height of the projectile being tested when its "fatness" was used to adjust it (the fatter it was, the shorter its EEN was compared to its actual nose height), with the change in the NBL being adjusted to be a fraction above or below that of the M79 AP projectile against the same plate. Again, he only allowed noses with smooth curves or straight sides; there could be no sharp corners above the base of the nose other than at the very tip (the EEN had no sharp point there either). He then plotted these fractional changes in NBL above and below the 1.00 standard used for the M79 AP projectile versus the projectile's EEN from actual ballistic tests and came up with an averaged curve set for different plate thicknesses and obliquities. Some of the data was scattered, but some was very consistent and smooth as the EEN changed. With nose shapes that were a tangent or secant ogive, computation was straightforward, but with other shapes (unfortunately for him, most WWII US Navy blunt-nosed projectile nose shapes), tedious curve-fitting had to be done to get the best "minimum error in area" elliptical nose shape. (A "secant ogive" nose is a tangent ogive with the effective diameter of the projectile increased beyond the actual diameter (the "swell" diameter in US Navy terminology), so the actual nose is only the upper end of a more sharply pointed "virtual" nose of this imaginary larger projectile whose center of arc is closer to the actual projectile's base, usually done to make the nose more streamlined (windscreens use such a more pointed nose shape in most projectiles) - there is a distinct crease where the nose and the cylindrical sides meet.)
Incidentally, for a tangent ogive pointed nose shape, the Nose Height in calibers is computed from this simple formula: NH = SQRT(R - 0.25), where R is the Radius of Ogive (CRH) in calibers (from the right triangle formed by the ogive arc: NH^2 + (R - 0.5)^2 = R^2).
For example, a hemisphere is a tangent ogive nose shape (the shortest possible) and it has a half-caliber (0.5) CRH, so its nose height is the square root of 0.5 minus 0.25, which is the square root of 0.25, which is 0.5, as expected.
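The hemisphere computation just walked through can be coded directly. NH = SQRT(R - 0.25) follows from the right triangle NH^2 + (R - 0.5)^2 = R^2 formed by the ogive arc, and it reproduces the values quoted elsewhere in this section (a minimal sketch; the function name is mine):

```python
import math

# Nose Height (in calibers) of a tangent ogive nose of Radius of Ogive R
# calibers (the CRH), from NH^2 + (R - 0.5)^2 = R^2, i.e. NH = sqrt(R - 0.25).
def nose_height(R):
    return math.sqrt(R - 0.25)
```

Checks against the values quoted in this section: nose_height(0.5) gives 0.5 (hemisphere), nose_height(1.667) gives 1.19 (M79 nose), and nose_height(6.0) gives 2.398 (the 6 CRH Nose Height of the British 6/10 CRH heads).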
Second, he measured the NBL for projectiles with sharp corners in their noses that had various EEN values, usually those with sharp-cornered AP caps. He discovered that for a given sharpness of the edge, when the Cap Edge Effect occurred, the amount of decrease in NBL was about the same for any plate thickness or obliquity. Most of his data was of course with the standard US Navy AP caps with a 50-degree face angle, as mentioned. Due to the tapering of the cap to follow the tapering of the nose in general, the inside angle at the corner was 133-135 degrees (it would have been 130 degrees - 50 degrees + 130 degrees = 180 degrees, a straight line - if the cap sides had been parallel to the projectile sides). He found a roughly constant drop of 12% below the NBL computed for the same EEN without the 133-135-degree cap edge corner when this effect occurred, so the latter is really the baseline NBL that the Cap Edge Effect is changing by this 12% drop. The NBL drop for those other noses with sharper or blunter corners will probably be different. Thus, if you use the default M79 nose, the final NBL value you get will be incorrect (the cap face EEN is wider at the top and thus shorter than the pointed M79 nose), which is why I allow the user to enter his own multiplier to this value as well as his own NBL, if he can find test results or computed results for a projectile nose without the corner that matches the general shape of the nose.
As mentioned above, Dr. Hershey also noted that the Cap Edge Effect had a MINIMUM plate thickness for every obliquity angle that allowed it. The minimum obliquity that allowed this effect for the 50-degree cap face angle of the standard US Navy WWII AP caps was 50 degrees, and it required a plate at least 0.5-caliber thick of standard STS, while only 0.106 caliber of STS was needed at 80 degrees. What does this mean? No matter how sharp the corner of a wood plane or knife or razor, it cannot cut material that can move aside as fast as it is moving forward. Try to cut the thick plastic protective wrapping around many store items with a sharp knife tilted above a given "biting" angle, measured with right angles to the material as zero. The knife just slides across the material like it was ice. Only when you angle the knife below the biting angle (closer to the vertical) will the material be pushed upward around the sharp edge and the knife cut the material - in effect, the material has to be moving SLOWER than the knife blade is to allow the knife edge to push into the material and cut it. The same goes for the corner of the Cap Edge Effect. When the projectile hits the plate at a velocity high enough to try to penetrate the plate at the given obliquity, part of the projectile velocity is pushing into the plate and part is pushing sideways parallel to the plate (all of it into the plate at a right-angles impact and none at 90 degrees, which is why I cut this off at 80 degrees). The sideways component is the one that cuts the notch in the plate face that drops the NBL by the fixed percent (-12% for the US AP caps, probably other values for other cap shapes with sharp edges). Below 50 degrees, the sideways component is at most about the same as the component into the plate, so the effect can be ignored under all conditions.
At 50 degrees, the plate material that will remain stiff enough so that it does not bend around the 133-135-degree corner at the edge of the US AP cap face must be at least 0.5 caliber, which means it has virtually no "dishing". Dishing is widespread denting larger than the immediate hole area, usually the major method of resistance in thin plates that act like trampolines and stretch back in a roughly conical or trumpet-bell-shaped pit much wider than the hole is. The inner face of the dish can be thought of as the back of a wave moving away from the hole: the steeper it is (the narrower, steeper-sided, and less shallow the dish), the slower the component of the projectile velocity in the direction parallel to the plate face need be to "catch up" with this retreating wave's back edge and begin to cut into it, causing the Cap Edge Effect. When the plate is 0.5-caliber or more, the NBL is so high at 50 degrees that the parallel component can reach this inner dish face and start the notch-cutting process to drop the NBL by 12%. At higher obliquity against thinner plates, at the NBL the projectile is moving so fast in the direction parallel to the plate face, with rather little velocity directed into the plate for penetration, that it can overcome even a rather thin plate's retreating wide dish/wave back, and the corner of the cap face can begin to cut its notch, dropping the NBL by the 12% value. When the motion into the plate is about the same as the motion sideways, as at 45 degrees, the effect is entirely gone, with it dropping to zero almost immediately below 50 degrees, for all intents and purposes, as Dr. Hershey's tables indicate.
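Only the two endpoints of the minimum-thickness requirement are quoted here (0.5 caliber of STS at 50 degrees, 0.106 caliber at 80 degrees). As a stand-in for Dr. Hershey's actual tables, the sketch below log-linearly interpolates between those two quoted points to reproduce the "dropping rapidly" behavior - the interpolation scheme is purely my assumption:

```python
# Hypothetical minimum STS plate thickness (calibers) for the Cap Edge Effect
# at a given obliquity. Only the endpoints (0.5 caliber at 50 degrees and
# 0.106 caliber at 80 degrees) are from the text; the log-linear curve
# between them is an assumption standing in for the real tables.
def min_thickness_for_cap_edge(obliquity_deg):
    if not 50.0 <= obliquity_deg <= 80.0:
        return None  # the effect never occurs outside this window
    frac = (obliquity_deg - 50.0) / 30.0
    return 0.5 * (0.106 / 0.5) ** frac
```

At the two quoted obliquities the function returns the quoted thicknesses exactly; in between it is only a smooth guess until the real tabulated values are wired in.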
What is Dr. Hershey's formula set for creating the EEN of any nose shape without an upper corner, or one where the corner can be tacked on later as an edge effect, just as with the AP Cap Edge Effect used in the program, after the EEN is computed? The general formula, where all values are measured in calibers (which eliminates any scale effects here), is
If you set α to 0, you get (r/a)^2 + (1 - z/b)^2 = 1, with "r" the radius out from the centerline, "a" = 0.5 caliber, "z" the distance along the centerline from the tip, and "b" the Nose Height,
which is the equation for an ellipse (a solid 3D ellipsoid of rotation, actually). If you set α to 2, you get r/a = z/b (with "r" the radius out from the centerline, "a" = 0.5 caliber, "z" the distance along the centerline from the tip, and "b" the Nose Height - the radius growing in direct proportion to the distance from the tip),
which is the equation for a sloped straight line (cone). A tangent ogive, with "R" as the Radius of Ogive in calibers, is a rather more complex version of this formula with
so, after some complicated manipulations, we get
which lies somewhere between α = 0 and α = 1, depending on the value of R. For example, if R = 1.667 (M79 nose), similar to many WWI projectile noses with R = 2, with and without caps on the nose, we get
From the formula for the Nose Height for a tangent ogive nose in calibers, we get NH = SQRT(1.667 - 0.25) = SQRT(1.417) = 1.19 calibers,
so the formulae work! Similarly, for a hemisphere with R = 0.5, we get α = 0 and NH = SQRT(0.5 - 0.25) = 0.5 caliber,
which are again correct since a hemisphere is an ellipse (α = 0) with a = b. The formulae work again! For a tangent ogive shape, the longer R is, the closer to α = 1 the nose shape becomes, never quite getting there unless R became infinite, making the Nose Height infinite, too (no space in a projectile magazine for one of these!).
What about other values of α? If α = 1, we get
This gives a curve where 'z' changes rapidly near r = a and z = b, but 'z' changes very slowly when r << a. In effect, at the tip of the nose (r near 0), the curve is almost a straight line (cone), and near the base of the nose (r near a) it acts like an ellipse (or almost a tangent ogive) where the nose curve is going down vertically to just touch the top of the vertical cylindrical side with no crease. It turns out that this nose shape, sharply pointed like a cone at the tip and curved steeply at the edge, is the optimum shape for any given Nose Height for maximum streamlining during flight through the air or when penetrating thick homogeneous, ductile armor at right angles (OB = 0). The longer the nose (the larger b is for a given a), the more streamlined yet. Making a long point using an AP cap and windscreen with this shape, prior to computer-controlled milling and metal-shaping machines, was rather expensive, so with a windscreen and AP cap forming a single long pointed arc, the cheapest way to get most of this shape is to use the secant ogive shape with a long radius, giving a more pointed shape (flatter side closer to a cone), accepting the small amount of turbulence caused by the crease at the joint where the nose base meets the top of the projectile's cylindrical sides. The British WWII APC projectiles using long-point "B" streamlined ballistic nose shapes (14" Mark VIIIB APC N.T. K, for example) had 6/10 CRH noses (AP cap side shape with the long windscreen continuing the curve to a small rounded tip), meaning that the Nose Height was only that of a 6 CRH tangent ogive nose (NH = 2.398 calibers), but actually used a 10 CRH (much flatter taper curve) radius (a secant ogive shape to reduce drag). For α = 1, the nose shape is the most pointed shape that does not have a crease at the nose/side joint.
If you increase the value of α over 1 toward 2 (cone), the side of the nose gets straighter and straighter (the straight portion of the nose moves down from the tip toward the base) and a crease at the nose/side joint gets more and more pronounced until at α = 2, the crease is the abrupt fold from a straight cylindrical side to a straight conical tapered nose with no curvature whatsoever.
What about α less than zero or greater than 2? With the former, the elliptical nose shape becomes flattened in the center and more curved at the edges, getting fatter for a given Nose Height, until below some large negative value of α the difference between a flat-nosed cylinder and the slightly smaller large-negative-α nose shape can be considered identical to the true flat nose for all intents and purposes - there is only a narrow curved gap near the edge of the face between the actual nose and the true full flat-nose shape. Note that this negative α value has to be rather large, since the penetration ability of a flat nose shape is very sensitive to even a small change in shape from a perfect flat nose. In fact, it may be more reliable to get test results for several near-flat-nose shapes and the true flat nose shape (I have two of these, including the true flat nose shape) and interpolate between them rather than try to use this α-based system for noses under about 0.5 caliber in Nose Height. For an α greater than 2 we get a similar situation in reverse, with the nose getting "fluted" (bent inward like a cone with the air sucked out from inside, looking like a trumpet bell). The higher the value of α, the narrower the upper nose is, until with a large-enough positive α the nose ends up as a thin needle of Nose Height length on an otherwise flat nose at the base, with a tiny bump around the base of the needle being the only thing not perfectly flat. This needle can be ignored if it is thin enough, and the nose then becomes a flat nose, but here the remaining shape detail is at the base of the nose right at the joint with the cylindrical body's upper edge rather than at the top of the nose, as with the negative α values.
Again, trying to approximate flat and near-flat nose shapes using the α-based system is probably not too useful when the nose gets below about 0.5 caliber in height, or at any height if the nose gets too fat or too fluted (a large-enough α toward either extreme, negative or positive).
Note that the α-based formula system does not address the nose shapes with sharp edges anywhere but at the base (α over 1, especially α over 2) or near the top (α with a large negative value), so many AP cap shapes are excluded. Hence the need for the Cap Edge Effect to overlay the α-based nose shape system to get the final effect on the NBL. This is what Dr. Hershey did, though it was tabulated only for the 50-degree face angle of the WWII US Navy standard cap shape.
To get the EEN, Dr. Hershey reasoned that for most nose shapes (pointed or oval) the existence of the pointed tip did not mean much, in that a longer elliptical nose with no point could approximate a shorter ogival nose with a pointed tip. He took every projectile he had good NBL data on and took the ratio of the projectile's NBL to the predicted M79 NBL (if the M79 NBL was supposed to be 1000 ft/sec under a set of test conditions for that projectile if it had an M79 nose, but it really had an NBL of 1100 ft/sec, then the ratio for that projectile was 1100/1000 = 1.1). This created a large table. He then had his personnel take each of the projectile blueprints for its nose contour and carefully create an elliptical shape (the α = 0 formula above) that fit the real nose shape as closely as possible, adjusting the b/a ratio of the actual nose to an adjusted bx/a ratio to get this "ideal" nose. This new value of bx/a = EEN. The M79 nose ended up with an EEN = 3.68 (true tangent ogive shapes were easy to re-contour with an ellipse). Most EENs were longer than the original nose, indicating the nose had a more streamlined shape than an ellipse of the same length, requiring a longer EEN replacement nose to compensate, but some of the blunter oval or even blunt-point US Navy projectile noses were not changed much or even had a shorter EEN than the real nose, indicating that they were less streamlined than an ellipse. When he plotted the NBL ratio values versus this hand-drawn EEN, most of the tangent ogive values formed rather smooth curves as EEN changed, though some of the more complex oval nose shapes gave NBL ratios that were scattered somewhat about the average curve for the tangent ogive noses of various lengths from 0.5 CRH (hemisphere) through 4.0 CRH (very long point).
Realizing that the tedious manual graphing of each nose to get the EEN was going to be unrealistic in the future, he set about trying to find a formula using the projectile nose's α value that would roughly give the EEN without the manual graphing. After some work, he got
The trick now was easily finding the value of α for the nose of interest, but since he had a formula for each curve for each α, he could easily create a drawing of a set of closely-spaced α-contours over which he could slide a same-scale cut-out from a blueprint of the projectile nose until he got a reasonable "eyeball" fit without any re-graphing at all (like having those fixed wedding ring size samples: you just slide them on until you get the one that fits best). If the nose had an illegal corner between the base and the tip using this technique (almost always an AP cap shape), requiring the Cap Edge Effect to correct it (it could be used on projectile noses without caps if they were shaped with corners, too, of course), then the selected α-contour had to bridge the region with the corner as well as possible, so that a smooth curve connects the nose shape on each side of the corner with minimum error, the corner sticking out like a shark fin (not as easy, but they came up with some general rules on how to do this). NOTE: The use of this EEN technique for noses under 0.5-caliber Nose Height does not always give very good results, since the effects of even small changes in nose shape are dramatic, but I have some good data here and hopefully can get some halfway reasonable interpolation scheme going.
I brought this up to show that the Cap Edge Effect is not just a "fudge factor" but an adjustment to a particular nose shape analysis system, here the α-formula system. I will be adding nose shape modifications to my next version of this program, if I can find enough information.
The following capabilities have been added to the original M79 AP Shot vs STS Penetration Program:
You can enter a US Navy Ballistic Limit (NBL) from another program or any other source for bare-nosed projectiles with other nose shapes to replace the M79 computation, if you wish to do so. This will correct the NBL baseline used when the nose shape is not the M79 shape, such as with an AP cap that is undamaged during the penetration. The Base-Through Penetration computation is ignored in this case due to lack of any data on this variation using another nose shape.
You can enter a windscreen as a percentage of the total weight and the program will calculate the change in the NBL due to that windscreen, using the WWII 105-pound US Navy 6" Mk 27 MOD 7 Special Common projectile with a long, pointed 5.1% windscreen as the baseline database. Usually, the weight of the windscreen on the projectile of interest gives a linear multiplier factor from that baseline: Automatic Multiplier = (WS WT %)/5.1. In addition, the value computed can be further adjusted by an optional user-entered multiplier factor if the windscreen is not a simple, unmodified, one-piece pointed cone or ogive. Regular solid sheet-steel windscreens, with a user-entered modifier of 0.75 and up, I estimate are strong enough to remain more-or-less intact on hitting plates under 0.03-caliber thick (any kind of homogeneous, ductile iron or steel); I use a step-function here. Since the windscreen is intact and is merely acting like the new nose of the projectile (no logic to change the M79 computations in the program, though), there can be no Hood or AP Cap effect, either. Several atypical windscreen designs are described in the program's screen printouts, with estimates for user-entered modifiers for them:
US Navy WWII cruiser and battleship AP projectile windscreens had lightly-covered holes cut in them (large triangular holes near the tip and many small round holes near the base) to cause water from an ocean hit to ram through the windscreen interior, where a colored dye bag was placed to allow the splashes from the shells fired by one ship to be separated from those fired at the same target by another, at least during the day. US Navy WWII tests showed that the Windscreen Effect on the NBL for the US Navy WWII 6" Mk 35 MOD 5 AP projectile, which had a windscreen that was much shorter and lighter in weight (the AP cap filled in most of the lower pointed nose region) and also had the dye-bag holes cut in it, was only *half* that predicted from the US Navy WWII 6" Mk 27 MOD 7 Special Common projectile data, due to the holes allowing the windscreen to fold inward and be torn open much more easily than a solid sheet-metal cone would as it was crushed downward from the tip by a steel-plate impact and then split apart by the projectile nose pushing through it and into the plate (the holes acted just like the perforations cut into a stamp to make it tear out easily). I cut the 0.03-caliber thickness needed to remain intact to only 0.015-caliber for these windscreens. I use the range 0.4-0.749 as the user-entered multiplier for this reduced intact maximum thickness; below this the windscreen is so weak that it will never remain intact, as far as I know. Windscreens for other US Navy projectiles without holes more-or-less matched the 6" Mk 27 MOD 7 data with only the automatic adjustment due to weight differences.
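The windscreen rules above can be sketched as follows. The (WS WT %)/5.1 automatic multiplier and the 0.03/0.015-caliber intact-thickness steps are from the text; the function itself is only my illustrative arrangement, not the program's real logic:

```python
# Sketch of the windscreen bookkeeping described above. The automatic
# multiplier (windscreen weight % / 5.1) and the 0.03 / 0.015 caliber
# intact-thickness steps are from the text; the layout is hypothetical.
def windscreen_factor(ws_weight_pct, user_modifier, plate_thickness_cal):
    """Return (windscreen effect multiplier, windscreen-remains-intact flag)."""
    auto = ws_weight_pct / 5.1  # baseline: 6" Mk 27 MOD 7, 5.1% windscreen
    if user_modifier >= 0.75:
        intact_limit = 0.03    # regular solid sheet-steel windscreen
    elif user_modifier >= 0.4:
        intact_limit = 0.015   # dye-bag holes weaken it (perforation effect)
    else:
        intact_limit = 0.0     # too weak to ever remain intact
    intact = plate_thickness_cal < intact_limit
    return auto * user_modifier, intact
```

For a 5.1% solid windscreen (modifier 1.0) against a 0.02-caliber plate this returns a multiplier of 1.0 with the windscreen intact; the same plate defeats a holed windscreen (modifier in the 0.4-0.749 range) because its intact limit is only 0.015 caliber.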
The WWII German brittle break-away aluminum windscreens were designed to prevent any possible AP cap separation from the nose on an oblique impact prior to the AP cap itself hitting the armor plate. Krupp must have made its earlier windscreens too strong - "over-engineered" - since they sometimes tore off at the solder joint between the AP cap and the projectile nose instead of where the windscreen was screwed onto the edge of the cap. Krupp compensated in its last WWII APC projectile design, the "20,3cm, 28cm, 38cm, & 40,6cm Psgr.m.K. L/4,4", which had long pointed windscreens for maximum range capability, by going as far as possible in the opposite direction, since this was not a significant problem with any other nation's projectiles, to my knowledge (cap attachment problems were invariably due to a quality control or design problem at the manufacturer, not anything to do with the windscreen). Since aluminum is roughly one-third as strong as steel of equal thickness, I assume a user-entered multiplier of 0.33 here. All other German windscreens were of regular one-piece sheet-steel design.
The Japanese weak-attachment-thread design was intended to allow the windscreen to tear off on ocean impact, especially impacts at over 45 degrees (it tore off on any impact whatsoever against anything even slightly solid, to my knowledge; probably even a tree), also knocking off the upper end of the AP cap or, in the smaller uncapped AP shells, of the nose itself - termed the "Cap Head" - which was only held on by the windscreen threads at the joint between the Cap Head and the AP cap/projectile nose. I assume a step-function drop to 0.1 for the user-entered modifier for over 45 degrees, as the windscreen tears off at the threads and the Cap Head ricochets off of the plate, taking the windscreen with it, both then separating in the air after the impact - they can stay close to each other for some distance, as shown by a hit on the superstructure of SOUTH DAKOTA from a 20.3cm Type 91 AP projectile that had ricocheted off of the ocean and punched a "Mickey Mouse" head in the thin superstructure plating: the windscreen hit broadside and tore a triangular hole through the plate, while the projectile body and Cap Head hit next to it and made two round holes that look just like the ears on the triangular face, hence the name I gave the damage. The windscreen and Cap Head would be knocked sideways away from the path of the projectile itself at over 45 degrees, dropping the windscreen effect immediately to only a small value, as noted in the program printout. This instantly made the front of the projectile into a tapered flat-nose projectile with a flat area almost exactly half of the cross-sectional area of the projectile body at its widest point just below the base of the nose (the flat tip was thus circa 0.69-caliber in diameter).
Such a nose would pass into the water - "dive" - with minimal chance of ricochet (it took ocean impacts of over about 7 degrees angle of fall to dive into the water rather than ricochet most of the time, compared to over about 12 degrees for a pointed nose with an intact windscreen). The tapered nose also was shaped much like the shell's "boat-tailed" lower body (tapered with a flat base to minimize base vacuum drag and get the best possible streamlining and gun range), so the post-impact projectile looked similar at either end, and it had little tendency to yaw sideways since there was only a very small area of angled nose region between the flat end and the cylindrical body for any water turbulence to push against. Pointed projectiles tended to tumble and slow down rapidly after only a short underwater travel due to the leverage on the pointed nose from the water-borne forces caused by the projectile forcing the water out of its way - in fact, in many cases they ended up going flat-base-first like a tear-drop (modern submarine hull). Due to the small increase in water pressure on the lower edge of the flat nose compared to the upper edge, the nose would tend to be pushed upward slightly and move in a much flatter trajectory underwater, not diving as deep as it would if it just continued on a straight line underwater at the same angle of fall - this roughly straight-line motion began when the angle of fall was 30 degrees or more, with the trajectory being significantly flattened and raised toward being parallel to the ocean surface even at 25 degrees angle of fall. One can think of these flat-nosed shells as "pre-tumbled", so they already seem to be going flat-base-first as far as water drag can tell.
The spinning projectile could thus, under the optimum angle of fall condition of circa 17 degrees, keep going more-or-less nose first, dropping only very slowly downward with minimal drag underwater until it reached circa 200 calibers of travel, at which point it would have slowed down to under about 100 ft/sec (about 30 m/sec), after which it would curve downward steeply and sink (if it had not hit the target or its fuze had not already blown it up by then, of course). By using a very long delay-action fuze set off on the water surface by impact shock - 0.2-second delay in the original cap-head-equipped late-1920's Type 88 AP design, 0.08-second in the smallest WWII 15.5cm Type 91 AP design, and 0.4-second in the larger WWII 20.3cm, 35.6cm, 41cm, and 46cm Type 91 projectiles - most of this distance would be traveled prior to the shell detonating, so it could hit the enemy ship underwater even if it fell rather far short, provided the range was close enough that the shell hit at a rather shallow angle of fall (between 15 degrees and 25 degrees was considered best), so that it did not dive too deep at the start and had some chance of hitting the target's lower hull anyway. This design worked a couple of times during WWII, most spectacularly on the US Navy light cruiser USS BOISE, which took a "perfect" (by Japanese Type 91 AP projectile design standards) 20.3cm Type 91 AP projectile deep underwater hit in its forward magazines that destroyed all of the ship's forward gun turrets by fire - though the water spraying in through the hole, together with the US Navy's use of slow-burning pure-nitrocellulose gun propellant, kept the magazine from blowing up. The Japanese, by contrast, used a British-"Cordite"-type gun propellant of mixed nitrocellulose and nitroglycerine - more powerful, but much more dangerous if set off in a magazine, as several British battle-cruisers blowing up in battle showed. 
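To see how the fuze delay and the ~200-caliber/100 ft/sec figures fit together, here is a back-of-the-envelope sketch. The exponential-decay drag model and the assumed 700 ft/sec speed after water entry are my own simplifications, not from the source; only the 200-caliber and 100 ft/sec endpoints come from the text.

```python
import math

def underwater_travel_calibers(v0_fps, delay_s, caliber_ft,
                               v_end_fps=100.0, total_calibers=200.0):
    """Distance (in calibers) covered during the fuze delay, assuming
    (my simplification) that velocity decays exponentially with distance
    from v0 down to ~100 ft/sec over ~200 calibers of travel."""
    # Decay constant per foot, chosen to match the text's endpoints.
    k = math.log(v0_fps / v_end_fps) / (total_calibers * caliber_ft)
    # With v(x) = v0 * exp(-k * x), integrating dx/dt = v gives
    # x(t) = ln(1 + v0 * k * t) / k.
    x_ft = math.log(1.0 + v0_fps * k * delay_s) / k
    return x_ft / caliber_ft

# A 20.3cm (about 0.666 ft) shell with its 0.4-second delay and an
# assumed 700 ft/sec water-entry speed covers well over 100 calibers
# before detonating - consistent with the long underwater reach above.
```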
The shell could also act like a small mine if it blew up underwater near the hull, which also actually happened at least once during WWII. During WWII, a small change was made in the windscreen to make it slightly more pointed and streamlined and also put a colored dye bag inside for identifying the ship firing a shell which missed, as explained with the US WWII AP projectiles, above. These modified Type 91 shells were re-named Type 1 AP projectiles. Only a portion of the battleship-size 35.6-46cm capped shells were altered to Type 1 and there were still many unmodified Type 91 shells in storage (as the US Navy found after WWII) - none of the cruiser-size shells had this small change, to my knowledge. In this case, no holes were needed, as the windscreen was knocked off by any water impact and also tore off of the cap head, exposing the dye bag directly to the water immediately. Thus, they got the colored splash effect for free.
The French made the most elaborate windscreens ever used after WWI for their 33cm and 38cm APC projectiles used in the DUNKERQUE and RICHELIEU Class battleships. Previous design studies during the 1920's showed the good underwater effects of the tapered flat nose, which the Japanese achieved, as described above, by literally slicing off the end of their projectiles - the caps in the larger sizes and the actual noses in the smaller sizes - when they hit at high obliquity to make such a shape (I do not know if the Japanese ever knew about these French tests or if these tests influenced the design of the Japanese "diving" AP projectiles). The French, who used an unusual, thickened, form-fitted hardened AP cap that was of rather constant thickness over much of the projectile nose, could not slice off the cap tip without slicing off the tip of the projectile body, too, which would compromise armor penetration ability. Instead, they employed an internal, heavily-reinforced, flat-faced metal "floor" in the base of their windscreens that was braced directly to the face of the AP cap under the windscreen, bracing the windscreen internally, with a significant gap between this flat windscreen floor and the face of the cap, so that this floor was narrower than the projectile itself, as with the Japanese Type 88/91/1 front end without the cap head. This floor was permanently attached to the AP cap, I think, and the very-long-point upper windscreen screwed to it (making these shells over 5 calibers long - the longest and most streamlined of any regular battleship-size APC shells ever used, as opposed to special-purpose designs such as those for super-long-range coast defense guns). The long windscreen was made in two parts welded together, each about half of the windscreen in length (the lower portion a tapering tube and the upper portion a long cone shape, with a decided "kink" in the windscreen side where they met). 
This long windscreen was only possible due to the reinforced internal flat base-floor supporting it (a regular simple threading of the windscreen lower edge to the cap upper face would not have been strong enough). Thus, the tapered-flat-nose shape for underwater trajectory also allowed a more streamlined shell (the base of the shell was of a unique curved taper shape with a very small flat portion, so the shells could not be stood on their base easily, adding to the minimal-drag design in a rather extreme manner and requiring horizontal storage and movement in the shell room). Since the windscreen was so solidly held to the AP cap, how did the reinforced flat base of the windscreen get exposed to the water on impact for diving purposes, with no weakening of the windscreen possible here? No problem. The French also pioneered the use of the dye bag in the windscreen to mark which ship hit where when more than one ship was firing at the same target (not really useful to them, since they had so few ships, but they must have had high hopes of eventually having a big fleet when this shell was designed in the 1930's). The US and Japanese and original French designs merely colored the splash, which could only be seen during daylight when the shell missed - the dye was invisible at night or, even worse, INVISIBLE IF YOU GOT A DIRECT HIT (unless there was a big explosion or fire on the target), which is obviously something you greatly desired to know! Not to worry!! 
Just reinforce the windscreen and screw the instantaneous impact nose fuze from an HE shell ("High Capacity" (HC) shell in both French and US Navy terminology) and the big booster charge from the same HE shell (normally used to magnify the fuze's detonator blast to ensure that the main filler charge of the HE shell would detonate properly, but here used by itself) on the tip of the windscreen, with the dye bag inside (supported inside in a special sheet-metal "can" up near the upper end of the windscreen rather than just sitting at the bottom in a paper bag as in the US and Japanese designs - the dye bag was rather light in weight). When the projectile hit ANYTHING, the sensitive nose fuze would detonate the booster and blow the upper end of the windscreen off, and the water ramming down the length of the windscreen through the huge hole in the top end would tear what is left of the upper windscreen off of the flat reinforced base floor. The floor was strong enough to remain in place when a water impact occurred, giving the desired tapered flat upper end for underwater travel, just as described with the Japanese shells. I assume that this floor retains about one-third of the mass of the windscreen and is thus able to resist crushing against an armor plate by that much, so I assume a user-entered modifier of 0.33 for this windscreen. A long delay action of some sort was needed here during underwater travel: Either by the Japanese method of just using a much larger black powder pellet in the delay section of the fuze (which could make the shell on a direct hit merely make a cannon-ball-like hole through the ship without exploding until it had exited the far side, a significant defect in this concept, as was shown in several Japanese AP shell hits on US ships during WWII that just made small holes with little effect) or by some sort of method of keeping the fuze from activating completely until after it had hit the ship target itself. 
One method is to have a fuze that only finishes arming and sets itself off when the deceleration stops. A simple idea is to have a floating firing pin weight that compresses a spring as it is slammed forward on impact, arming the fuze and waiting for the deceleration to stop to allow the spring to throw the firing pin into the primer. Interestingly, this design actually was made and issued as the aborted US Navy Mk 11 Base Detonating Fuze (BDF) around 1933, but was found to be unreliable when mass-produced at the start of WWII and declared unserviceable in 1941, only to be later modified (new delay element) and reissued for some US Navy cruiser-sized AP and Common shells during late WWII when the replacement fixed-0.033-second-delay, water-impact-sensitive, reliable-on-large-obliquity-impacts Mk 21 BDF fuzes, used in ALL other US Navy AP shells in WWII, were not being produced fast enough. This spring-action-fired design means that the shell has gone through the water, then the outer hull or armor on the target, and is now finally in empty space inside the target, where you want it to start its regular delay (all of the prior delay was indefinite pre-arming time, not a specific post-impact delay consuming the black powder pellet needed to allow deep travel into the target); if reliable and not sacrificing any other important function, this would be the best way to go. Another method is a base fuze that is made much less sensitive to impact and is only set off on hitting a heavy steel plate, not on rather "soft" water, just like some post-WWI anti-submarine shells were, though this means that the shell may be a dud if it hits thin plate, even light armor that would set off any other AP shell base fuze. I have no idea what design the French base fuzes used to enhance underwater travel, assuming that this was actually part of the French fuzes in use at the start of WWII (perhaps the extra-long-delay design was "for future use" and never was actually made). 
Note another improvement in the HE-tipped dye bag windscreen design, ignoring underwater action altogether: For the first time, a colored blast and puff of smoke would be visible at night or if you got a direct hit, solving those problems, too. The French seemed to want a "one size fits all" windscreen design that solved everything in one tidy bundle. They called this windscreen design with HE and fuze the "K" shell alteration (not sure what "K" is the abbreviation for).
The British learned of the French "K" design for windscreens with explosive dye bag operation after the Battle of Dunkerque, when they were pushed out of France and back to Britain. Since they had a large fleet of battleships with similar-size guns (KING GEORGE V Class, QUEEN ELIZABETH Class, REPULSE, RENOWN, HOOD, etc.), this shell-identification problem was a real one for them, which I think turned out to be true at least for part of the battle with BISMARCK a short time later. The British decided to adopt the French concept, name and all, but not to make any modifications to their existing windscreens other than beefing them up to support the nose fuze and booster at the windscreen tip and adding the dye bag just like the US and Japanese did, lying in a paper bag at the base of the windscreen, where it was pulverized on firing the gun and spread around the inside surface of the windscreen during the flight as the projectile rapidly spun. They also adopted a two-part windscreen welded in the middle for added strength and used a standard British Navy "Direct Action" (US Navy terminology "Point Detonating") instantaneous HE shell nose fuze and booster. Since the reinforcement was minimal and the British windscreens much shorter and lighter than the huge French windscreens, the blast of the HE booster would tear them almost completely away, with a water impact removing what was left, so no part of the windscreens remained if the shells hit water prior to their targets. If one of these shells hit a steel plate and exploded, however, some ragged portion of the lower windscreen would probably still be hanging on, and that is why I do not completely remove all windscreen effects from British "K" shells used late in WWII and after, suggesting a user-entered modifier of only 0.05.
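Collecting the user-entered windscreen modifiers suggested in the preceding paragraphs into one place (the dictionary keys are my own labels; the values are the ones stated in the text):

```python
# Suggested user-entered windscreen modifiers from the text:
WINDSCREEN_MODIFIERS = {
    "japanese_type91_over_45_deg": 0.10,  # cap head and windscreen knocked away
    "french_K_reinforced_floor":   0.33,  # flat floor retains ~1/3 windscreen mass
    "british_K_late_war":          0.05,  # only a ragged lower remnant hangs on
}
```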
You can enter a Hood (soldered-on, thin mild steel, form-fitting nose covering with a thickened region at its lower edge where threads were cut into it to hold the windscreen) as a percentage of the total weight. The default here is also based on the 6" Mk 27 MOD 7 Special Common projectile, in this case using its 5.2%-weight Hood's effects. In this case there is no manual adjustment, just a linear automatic adjustment due to changes in the Hood weight percentage entered, just like the built-in windscreen percent-weight adjustment. Hoods were created to allow windscreens of uncapped base-fuzed Common projectiles (Semi-Armor-Piercing or SAP in US Army terminology) to be screwed on without cutting threads into the nose of the shell itself, as had been done to older, blunt-nosed shells just after WWI to make them fly farther with minimal cost, with somewhat negative results as to penetration ability due to the thread grooves causing the nose to crack on impact and sometimes to break up the entire shell prematurely. Thus, Hoods virtually always were accompanied by a windscreen and, for most modern WWII-era uncapped AP/SAP shells, vice-versa - only some rapid-fire cannon shells with windscreens, such as the final WWII US Army and Navy 40mm M81A1 AP projectile, and the 15.5cm and 20.3cm uncapped versions of the Japanese WWII Type 91 AP projectile design, did not use a Hood (ironically, the earlier 40mm M81 AP shell did use a Hood, but cost too much when mass-produced during WWII). It is possible for a windscreen to be knocked off by a very thin plate that allows the Hood to stay attached to the projectile nose (windscreens would be knocked off as easily as my Type 1 AP caps; see below), meaning only the Hood by itself is in place on the next plate impact, but this would be rare, of course.
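The linear automatic adjustment for Hood weight can be sketched as follows, assuming (as the text implies) simple proportional scaling from the 5.2% Mk 27 MOD 7 baseline; the function itself is my illustration, not the program's code.

```python
def hood_effect_scale(hood_weight_percent, baseline_percent=5.2):
    """Linear scaling of Hood effects relative to the 6-inch Mk 27
    MOD 7 Special Common baseline Hood of 5.2% of total weight."""
    return hood_weight_percent / baseline_percent
```

Entering 2.6%, for example, would thus halve the baseline Hood effects.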
You can enter a default hard AP cap weight percentage, with or without a windscreen, too. In this case, there are two different AP cap designs to choose from, with different NBL changes versus plate thickness and obliquity. These caps, typical of US WWII designs, have a conical face with a 100° tip angle (that is, 50° measured on each side of the centerline) and a rounded tip, though several variations, with such things as raised flat tips (the French term "méplat" is used for such a tip on AP caps), were also used; I know of no real difference in effectiveness due to such variations. You can also select if you want the caps to be soft/tough (I lump these together against STS plate) instead of the default hard type, which changes some of the computations. Both cap designs, one from the US Army and one from the US Navy, both in use early in WWII, have somewhat different automatic adjustments due to changes in percentage weight from the default values. These two caps are based on the 130-pound US Navy WWII 6" Mk 35 MOD 5 AP projectile's 20% cap and the 15-pound WWII US Army 76mm M62 APC projectile's 13.9% cap (both also had windscreens). The first is one of my Type 1 caps, which are knocked off by an impact against any iron or steel plate of 0.0805-caliber thickness or more, and the second is my Type 2 cap, which takes about twice that thickness, 0.1605-caliber, to knock off. These projectiles have different values for their effects on penetration when they suffer damage (shatter) or shift position (tilt) during penetration. Both also impose a -12% drop in projectile NBL due to their shape, with a sharp edge where the cap face and cap side join. There is much more discussion about these things below.
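The decapping thresholds for the two cap types can be expressed directly. This is a sketch using only the thresholds stated above; whether the comparison is inclusive at exactly the threshold is my assumption.

```python
# Plate thickness (in calibers) needed to knock off each cap type:
DECAP_THRESHOLD_CAL = {1: 0.0805, 2: 0.1605}

def cap_knocked_off(plate_thickness_cal, cap_type):
    """Type 1 caps come off against any iron/steel plate of
    0.0805-caliber thickness or more; Type 2 caps need about twice
    that, 0.1605 caliber (per the text)."""
    return plate_thickness_cal >= DECAP_THRESHOLD_CAL[cap_type]
```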
See the APPENDIX ON STEEL METALLURGY for details on armor properties to see why AP caps were developed.
The complex effects of AP caps:
Cap attachment methods, to my knowledge, were as follows:
The original method used by the US-designed soft "Johnson Cap" (a small cylindrical "pill box" form-fitted to the tip of the projectile nose with a very-blunt-cone-shaped face, made of mild steel), which was popular circa 1900 in several countries, was to cut a groove-ring into the inner face of the skirt (the bottom portion of the cap with the pit that fit the projectile's nose tightly) and a matching groove of equal shape into the upper end of the projectile nose, so that when the cap was fitted, the two grooves lined up to form a ring-shaped tunnel wrapped around the projectile nose tip. A hole of equal diameter was bored into the cap skirt from the outside so that it just pierced into that tunnel tangentially, and a thick, stiff wire of the tunnel's width was forced into that hole so that the wire wrapped entirely around the nose, filling the tunnel completely. The end of the wire was then cut off flush with the outside of the skirt. This wire locked the cap to the nose mechanically, as it acted like a raised ridge projecting from the nose of the projectile into the cap.
Later designs dispensed with the wire and actually formed raised ridges into the nose itself, with the cap skirt forced down over them, snapping back to its standard shape when a groove cut into the inner face of the skirt was filled by the nose ridge. A number of variations of this were used where shallow grooves and/or ridges were shaped into both the projectile nose and inner skirt face, sometimes shaped like interlocking hooks in cross-section, usually near the lower edge of the skirt, and the cap pressed down and inward by a mechanical press so that the ridges interlocked, holding the cap. These ridges in the projectile nose were found during more stressful, oblique-impact tests with hardened caps in Britain after WWI to cause the projectile nose to break up (the "cut on the dotted line" effect, just like those threads originally cut into uncapped SAP projectiles to hold add-on windscreens), so they were replaced with a ring of equally-spaced shallow pits ground into the nose just above the lower edge of the AP cap skirt, into which the soft cap material at this edge was crimped, with no sharp edges or connected ridges or grooves completely ringing the nose to weaken it. These turned out to be successful. Some British and all US AP projectiles during WWI already used this ring of pits, and this method of attaching caps was retained through the end of WWII, even though it was largely replaced for strength purposes by the use of solder covering the entire skirt inner face in the period between WWI and WWII for both AP caps and Hoods (the crimping was considered a reinforcement of the solder "just in case"). 
The French retained the mechanical attachment method of a shallow groove ringing the nose, with the lower edge of the skirt crimped into it entirely around its perimeter, during and after WWI, but the groove was made as shallow as possible and, since their rather thin, contoured AP caps covered almost the entire nose anyway, moved down to almost the edge of the base of the nose. I do not know when the French adopted solder in addition to this method of cap attachment, but they, like the British and US, kept it even after soldering became the primary attachment method.
The use of solder ("sweating") to attach caps, either as an add-on to the mechanical methods such as the ring of shallow pits into which the lower edge of the cap is crimped (see b, above) or as the sole method used, seems to have started either in Austria-Hungary at the Skoda projectile design and manufacturing company or at Krupp in Germany. At least these are the two that first used it, to my knowledge. Soldering is by far the strongest and most reliable cap attachment method and does not involve any modifications to the projectile nose that might even slightly reduce the projectile's crack-resistance on armor impact. These two companies completely replaced all other AP cap attachment methods in their APC projectiles circa 1900-1906 (not sure of the year), since blueprints of shells designed during 1906 show no crimping or other mechanical attachment points in the nose or cap. The Krupp soldering process used an extra-strong high-temperature solder that had to be carefully applied if it was not to compromise the hardening used in the projectile nose, but they seem to have been experts and this was not a problem. I do not know what kind of solder Skoda used, but it may have been the same kind, since these two companies seemed to trade information on some topics rather a lot (at least some of their projectile designs had very similar design features). This kind of solder was also used in WWII in some US Army anti-tank projectiles (the companies involved used it for their normal commercial work and simply kept using it to make AP projectiles, too, even though it was well above the minimum specification for "grip" of the cap to the projectile nose), but other than that, most caps made by all other manufacturers were soldered on using a somewhat weaker, but much more easily applied, low-temperature solder that seems to have been quite adequate for this purpose. 
The Japanese came up with an improved low-temperature solder circa 1930 that the US Navy decided was worth developing after finding out about it after WWII, though I do not know if it was ever used in the US for its AP caps. It had improved strength without the problem of heat affecting the hard projectile nose, though its strength was not as high as that of the Krupp solder (another case of "over-engineering" or "solving a problem which did not exist"), which Krupp used until the end of WWII, to my knowledge. British APC projectiles did not adopt soldering until well after WWI (their circa-1920 APC projectiles only used the mechanical crimping-into-shallow-pits method), though the US seems to have done so earlier, perhaps during WWI by some manufacturers. I am not sure of the exact dates when solder was adopted by Britain and the US. The Japanese switched to solder-only circa 1930 when they introduced their Type 91 AP projectiles to replace their earlier British-designed "Mark 5" cruiser and battleship AP projectiles and the altered Mark 5 introduced in 1928 - called the Mark 6 or, later, Type 88 AP projectile - with the underwater-hit fuze and AP cap and windscreen modifications (those earlier shells used only the mechanical shallow-pit attachment method, as did all British designs of the period). The low-temperature solder made decapping somewhat easier - such caps are what I call my Type 1 AP caps - and the extra-high-strength solder used by Krupp and by the occasional WWII US anti-tank shell manufacturer made it more difficult - my Type 2 cap - both of which were discussed previously.
One slightly bizarre method of attaching an AP cap to a projectile was employed by Krupp near the end of WWII for its last few manufactured lots of 40.6cm (16") APC projectiles for its Coast Defense guns. Due to lack of an ability to get enough raw materials from abroad during WWII (what with all of its borders outside of Europe held by enemies who prevented many German cargo ships from sailing outside Europe itself), Germany began to have shortages of several important materials needed to manufacture weapons, tanks, planes, etc. Some of them were metal alloying elements used for such things as solder. Therefore, to reduce the use of solder and other such metals to the absolute minimum possible, a form of extra-strong rubber cement was developed by Krupp and used to attach the AP caps in these last few large-caliber projectiles. Tests after WWII by the US Navy showed that the rubber cement, while not quite as strong as regular low-temperature solder used by the US (see c, above), was quite adequate for this purpose during shell handling and gun firing. This might not have worked as well if the caps had been of the WWI US and British soft variety, rather than the hardened type introduced into new APC projectiles by everyone after WWI (most old APC shells, even those extensively remanufactured after WWI, were not changed as to their AP caps, including all US Navy battleship-size AP projectiles, which retained the WWI Midvale Unbreakable soft-capped design until replaced by new, modern shells in the late 1930s - except for the couple of old ships still using the 12" (30.5cm) size guns, which never had theirs replaced), since the rubber cement might have interfered with the shockwave propagation from the nose to the cap on nose impact with the hard face of a face-hardened plate. Hard caps destroy the plate face on impact and the projectiles rely much less on this later-on nose-to-cap shock transmission to do their job. 
It is ironic that this shows that even Krupp finally agreed that their use of extra-high-strength high-temperature solder, with its added work needed to preclude projectile nose hardness problems, was an unneeded extra complication, as everyone else had known since shortly after the end of WWI.
There were three basic types of caps used to defeat face-hardened armor (and later used with some of the small, high-velocity anti-tank projectiles to help defeat homogeneous armor that was very thick relative to the projectile) where impact shock was so great that the projectile nose would split open and the projectile break apart due to the impact-shockwave-induced projectile damage called "shatter", greatly reducing its penetration ability at low obliquity. Each cap type was developed to improve on the previous one, in this order: soft, tough, and hard. The following discussion covers what each type accomplished in improving the projectile's ability to defeat the armor it was designed to be used against.
How did AP caps improve things for the projectile when used against face-hardened armor (its original purpose)? When a hard-nosed projectile runs into hard-faced plate at right angles, the contact point on the projectile stops moving (the plate and nose may flex slightly, but not much). The rest of the projectile keeps on going, creating an enormous shockwave at the impact point as the force on that spot goes almost straight up in strength. It keeps going up until (1) the projectile completely stops, remaining intact, with or without damage to the plate in the process; (2) the projectile nose shatters into pieces as the reflected shockwave on the sides of the nose spalls off chunks of nose material, rapidly causing the nose to crack to pieces and violently fly sideways in all directions, usually followed by the destruction of the rest of the projectile as it flattens itself on the surface, thus relieving the face layer of much of the force on it; or (3) the face layer cracks through either directly (requires maximum force) or after the shockwave reflects from the plate back surface (requires less force as the original shockwave was a wave of compression toward the plate back and the reflected shockwave is now a wave of pulling ("rarefaction") also toward the plate back, so both shockwaves assist the projectile in the penetration by stressing the hard face in the same direction), throwing the armor in chunks out the plate back, after which the projectile, broken or intact, pushes into the hole and attempts to widen it to make it completely through, which may or may not be possible with the remaining energy of the projectile. As noted, the shockwave in the plate radiating out from the impact site is matched by an equal and opposite shockwave in the projectile nose. 
The shockwave in the plate can move in all directions as an expanding hemisphere until it hits the plate's back surface, where it reflects, possibly spalling off chunks of the plate and reducing its resistance as the shockwave moves back toward the plate face from behind. This takes time, though. The shockwave in the projectile can move directly down the projectile length with no interference until it hits the upper end of the explosive-filled internal cavity, which, if curved properly, can bend the shockwave around the end with minimal reflection, after which the shockwave only reflects when it hits the projectile's base. This takes a very long time, given the length of most projectiles (not long with a cannon ball, of course). However, the portion of the shockwave in the projectile nose spreading sideways has nowhere to go, as it hits the sides of the nose immediately, when it is at maximum power. Since the impact we are discussing here is thought to be strong enough to punch through a thick armor plate, the shockwave hitting the sides of the projectile nose is of that power, too. Note that the metal of the projectile is not much different from the metal of the plate, so "what is good for the goose is good for the gander": the reflecting shockwave on the sides of the nose blows out the sides of the nose like an exploding balloon when popped, rapidly taking the rest of the nose with it, and giving the plate the advantage from then on (the penetration velocity for the projectile in this condition goes way up - 20-40% in KC-type armors at right-angles impact, usually close to 30%). I call this Primary Shatter, which is caused by a shockwave. If for some reason the projectile nose survives the Primary Shatter attempt, then we get the situation in (3), where the intact (roughly, at least not shattered!) 
projectile nose is bearing down with more and more pressure on the impact point as the inertia of more and more of the projectile piles up on the stopped projectile nose tip. The older face-hardened plates were somewhat brittle (with a couple of exceptions), and the reflected shockwave from the back of the plate is "the straw that breaks the camel's back", causing the face to fail first (again assuming we are hitting with enough energy to penetrate), punching out a hole and, hopefully, allowing the projectile to finish opening the hole and penetrating through the plate. If the plate is so tough that its face still will not break after the reflected shockwave reaches the face surface where the projectile nose is still bearing down on the plate - either due to our impact not being with enough energy (which we have assumed not to be true in this case) or due to a superior, extra-tough plate type (only a couple of WWI-era plates, but most post-1930 face-hardened armor types) - then we have another shatter problem with the projectile nose, only this time a more gradual increase in force until either the face of the plate finally gives under direct impact force and penetration continues, and/or the hard projectile nose suddenly collapses, which is in effect also a kind of shatter, but without a sharp shockwave as the cause. I call this compression-induced projectile nose damage Secondary Shatter, caused by a force that more-or-less gradually (as compared to a shockwave, that is) increases until it exceeds the strength of the projectile nose or body, causing it to suddenly collapse, like a soda straw pushed from both ends suddenly folding up, ending its resistance to being compressed. 
Secondary Shatter is more likely to result in only the nose shattering, with the softer lower body in some cases remaining in one piece and penetrating the plate anyway, assuming that the plate face collapsed under the increasing force, too, before the projectile body collapses, which would occur shortly after the nose collapsed if nothing is done to relieve the force on the middle body between the projectile base and the hard face.
There are only two ways to punch through a face-hardened plate with a hard-nosed AP projectile at right angles: (1) Punch the plate so hard that the direct pressure or direct pressure plus reflecting shockwave cracks the face layer entirely through, so the portion of it cracked out under the projectile nose can be pushed out of the plate back, tearing through the backing and forming a trumpet-bell-shaped hole, more-or-less cylindrical in the face layer and widening toward the back surface in the back layer. If the initial punched hole is too narrow, the projectile must crush out a wider hole in the face to allow penetration, throwing more pieces out the back, too. The entire mass punched out of the back has to be accelerated to at least the Remaining Velocity of the projectile, which soaks up a lot of projectile energy that no longer can be used to widen the hole to let the projectile through, reducing the chance of a complete penetration. Even a shattered projectile can do this, though it requires significantly more energy to do so - typically, the striking velocity must go up by 30% at right angles to allow penetration of KC-type armor with a shattered projectile, nose or complete shatter makes no difference (though this can vary from 20% to 40% under various conditions), meaning the energy goes up by 1.3 x 1.3 = 1.69 or that 69% more energy is needed, on the average. Much of this is due to the hole in the plate needing to be made bigger to get most of the pieces through (I assume the velocity where 80% of the body weight goes through the plate when the projectile breaks up is the NBL). Primary or Secondary Shatter makes no difference. (2) Gouge a pit in the plate face directly under the nose so that pieces are thrown sideways out of the plate front surface and then push through what is left, punching it out the back, as with (1), above. 
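The velocity-to-energy arithmetic in (1) can be checked directly, since kinetic energy scales with the square of velocity:

```python
def shatter_energy_penalty(velocity_increase=0.30):
    """Fractional extra striking energy needed when shatter raises the
    required striking velocity: a 30% velocity rise means
    1.3**2 - 1 = 0.69, i.e. 69% more energy on average (the text's
    20%-40% velocity range corresponds to 44%-96% more energy)."""
    return (1.0 + velocity_increase) ** 2 - 1.0
```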
Optimally, the gouge should just destroy the entire face layer in front of the projectile, with only the soft back left to be pushed backwards like a thinner homogeneous plate. Without the face layer, projectile damage of any kind is reduced too. Most impacts cause some shallow flaking of the surface of the brittle face around the impact, but this material is not directly in front of the projectile, so it does not help in the penetration. Method (2) is by far the most efficient, when it can be accomplished.
AP caps are designed to cause the nose to remain in one piece after impact with armor (any kind of armor). Where they are needed, they provide an important benefit, but where they are not needed, they just interfere with the penetration ability compared to the same projectiles without them (an arctic coat is good in the arctic, but it would be a definite negative in the Sahara Desert!). The four kinds of AP caps have the following properties (from what I can figure out):
Soft caps. They are made of mild steel. These merely flatten out on the plate face, forming a doughnut of soft steel around the nose so that the sideways shockwave, formed when the true projectile nose hits the hard surface of a face-hardened plate, has somewhere to go outside the projectile itself. It then explodes the cap material outward sideways, but the projectile's actual nose is still intact and can continue to press against the impact point, aided by the reflecting shockwave in the plate. Assuming that the impact energy is enough to penetrate the plate with an intact nose, older, more brittle face-hardened plate face layers usually cannot resist the stress and fail, being pushed out the plate back as mentioned in (1), above. Thus, soft caps stop Primary Shatter when they work. Tougher face-hardened armors - most post-1930 new plate types (not Japanese VH nor the version of KC n/A used in the turret faces of the German Pocket Battleships, though this does include the later form of KC n/A used in the WWII German battleships) and pre-WWI US MNC and Austro-Hungarian Witkowitz KC - are so tough that even with the nose pressing on the plate face, they will not break, so a soft-capped projectile will usually still fail by Secondary Shatter. It still has a good chance of suffering only nose shatter and otherwise remaining intact, but only if it penetrates the plate anyway; failure to penetrate results in complete shatter once Secondary Shatter sets in, just as always occurs, with or without penetration, with Primary Shatter. Against the older, more brittle face-hardened armors, Hoods act just like soft caps do with the tougher later face-hardened armors, allowing nose-only shatter if the shattered projectile completely penetrates, but complete shatter otherwise.
This makes some sense, since Hoods are thin caps with limited shockwave absorption capacity - enough, though, to limit damage if the plate is brittle and fails even when the projectile nose shatters. Soft caps and Hoods do not work at all above 20° obliquity and have only a 50% chance in the 15-20° range - they pull partially off of the nose as the cap/Hood face twists to fit the tilted plate face, preventing them from absorbing the shockwave energy wherever there is an air gap, so the nose shatters from those spots just as if the cap did not exist. Hoods have no effect whatsoever as to protection of projectiles against the tougher face-hardened armors that shatter regular soft-capped projectiles. Against homogeneous plate, which is harder than they are, they deform just as when hitting face-hardened armor and will shatter at a rather low homogeneous plate thickness, degrading penetration in much the same way as the hardened caps do above their plate shatter thickness.
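The obliquity rule just stated can be written as a tiny decision function (a sketch of the stated rule only; the function name and the use of a seeded random draw for the 50% band are my own assumptions):

```python
import random

def soft_cap_or_hood_holds(obliquity_deg: float, rng: random.Random) -> bool:
    """Per the rule above: soft caps and Hoods fail entirely above 20
    degrees obliquity, work only half the time in the 15-20 degree band,
    and stay on (absorbing the shockwave) below 15 degrees."""
    if obliquity_deg > 20.0:
        return False
    if obliquity_deg >= 15.0:
        return rng.random() < 0.5  # 50% chance of pulling partially off
    return True
```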
Tough caps. These are soft caps, but made with what looks like almost armor-class homogeneous, ductile steel with high quantities of nickel (Skoda blueprints for 1908 APC projectiles show a cap with 25% nickel content, so they were using tough caps, as were the Russians, prior to and during WWI). Tough caps seem to work at all obliquities, at least up through about 30° obliquity, just like hard caps - tests by the British of Russian 12" (30.5cm) APC projectiles with these caps had no nose shatter problems whatsoever at 20° obliquity. The final 1911 versions of German Krupp WWI-era AP caps ("C/11" of "L/3,2" or "L/3,4" - 3.2 or 3.4 calibers long) were soft but acted like hard caps at up to 30° obliquity too - against half-caliber KC plates, at least - so regardless of the design details, I am lumping them with the tough caps. Either Krupp used tough caps too, since Skoda and Krupp seem to have shared a lot of projectile design information, as looking at their projectiles shows (same fillers designed the same way, etc.), or the extra-strong high-temperature solder used by Krupp was gripping the cap so well that it acted like a tough cap even though it was actually soft (it did not peel off the projectile nose under the conditions where regular soft caps do - about half the time when hitting at over 15 degrees obliquity and all the time at over 20 degrees obliquity, as mentioned above). To my knowledge, this was the only time this solder was used with a cap that was not hardened - post-WWI Krupp and WWII US Army APC projectiles using this kind of solder were all hard-capped - but I have no metallurgical test results on the Krupp WWI AP caps to nail this down. I do know that the previous design of 1907 Krupp APC projectiles ("C/07" of shorter lengths) also seems to have had AP caps that were soldered on (no crimping of the lower cap onto notches/ridges in the nose is seen), and they acted only like regular soft caps, with the 20-degree obliquity maximum.
Tough caps act like soft caps as to their ability to stop Primary Shatter when used against face-hardened armor, but they seem to be more rigid, like a hard cap, and do not pull off the nose as easily, allowing penetration without shatter over a larger obliquity range. Right now I am allowing them to work at any obliquity, but with my FACEHARD program's "AP Cap" variable set to "3", so they act like the thin hard caps used by some WWI & WWII SAPC projectiles with limited functionality ("-1" = Hood, "0" = Uncapped, "1" = Soft Cap, & "2" = Fully-functional Hard Cap). Unfortunately, I have no tests at over 30 degrees obliquity to absolutely confirm this. If I ever get more data, I will separate them from hard caps in the logic. Against homogeneous, ductile armor like STS, I am lumping them in with soft caps in my current program.
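For reference, the FACEHARD "AP Cap" settings listed above can be captured as a small enumeration (a documentation aid only - FACEHARD itself is not written this way, and the type name is mine):

```python
from enum import IntEnum

class APCapSetting(IntEnum):
    """FACEHARD's "AP Cap" input values, as described in the text."""
    HOOD = -1
    UNCAPPED = 0
    SOFT_CAP = 1
    HARD_CAP = 2       # fully-functional hard cap
    THIN_HARD_CAP = 3  # limited-function thin hard cap (some WWI & WWII
                       # SAPC projectiles); also used provisionally for
                       # tough caps until better high-obliquity data exists
```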
Hard caps. These are hardened in their face region to usually circa 500-600 Brinell, with the lower region around the projectile nose softened to only about 250-300 Brinell (what I think tough caps were hardened to throughout their volume), in the (mistaken) belief that the cap should cushion the projectile nose even when its face is gouging out a pit in the armor plate. This face-gouging ability means that the ability to soak up the sideways shockwave and prevent Primary Shatter, which is the primary function of soft caps, has no real meaning with hard caps. The pit gouged in the plate face as the cap and face smash one-another and the cap is destroyed means that the cap has acted like a center-punch, so that the projectile's real nose now has a "socket" to insert itself into, with the hardest portion of the face gone, the plate thinner, the nose prevented from ricocheting at an oblique impact, and any sideways shockwaves in the projectile nose formed by the impact now going back into the plate itself, so that the plate acts like its own AP cap replacement (shockwave "judo"!). Also, these caps work at all obliquities where they hit the plate prior to the projectile body; essentially at any obliquity whatsoever. Since these caps do not flatten out on the plate face, making the cap thicker digs a deeper pit and improves penetration by the projectile - but this only works up to the point where the cap starts to dig into the soft back, for which it is a poor substitute for the real projectile nose. Also, the harder the cap and the thicker the hard portion of a given size cap, the better against face-hardened armor. The first country to absolutely use hardened AP caps, to my knowledge, was France, which was using them in 1909 - see the book "Artillerie Navale: Les Canons-Les Projectiles" by Le Colonel L. Jacob (1909), which includes a detailed description explaining why the cap face (here just the conical tip face region above the projectile's nose under the cap) needed to be roughly as hard as the face of the hard plate that it hit (absolutely correct; harder, if possible) and why the lower region of the cap should be kept soft as a "cushion" to protect the nose (again, this last was found to be mistaken during WWII by the US Navy, to be described next).
Super-hard caps. These are an improved form of hard cap, unique to the US Navy at the end of WWII. The best AP caps ever made were used on the US Navy 6" Mk 35 MODs 9 and 10 AP projectiles and the 8" Mk 21 MOD 5 AP projectile. They were shaped just like the previous MODs' AP caps - 6" Mk 35 MOD 8 and 8" Mk 21 MOD 3 - but were hardened to 650-680 Brinell ALL THE WAY THROUGH, leaving only a narrow bottom edge region kept soft to allow the crimping into the ring of shallow pits in the lower nose used with all US Navy AP caps, on top of the soldering. These super-hard caps were made of a special molybdenum alloy steel ("triple alloy"). There was not even a suggestion of a protective cushion around the projectile nose - the best "cushion" turned out to be to destroy as much of the hard face of the plate as possible, which lowered the impact shock when the projectile nose finally reached the plate itself! These caps were very thick and could make a hole entirely through the deep (55% of total plate thickness) face layer used in WWII US Navy Thick Chill Class "A" armor (made by all three armor manufacturers almost identically). That this cap really helped was shown against a Japanese experimental 7.3" (18.5cm) VH plate tested by the US Navy after WWII. The usual VH plates taken from the (never used) SHINANO storage cache were found to be of somewhat lower quality compared to WWII US Navy or thick German KC n/A plates, being about 89% as resistant as a similar US Class "A" plate with a 35% face thickness at stopping the same shell at the same striking velocity (US Class "A" armor = 1.00) - that is, a SHINANO plate would need to be 1/0.89 = 1.124 times as thick to stop the same shell at the same obliquity and striking velocity. This was still somewhat better than the pre-WWI British Vickers Cemented (VC) armor on which it was based, so the Japanese had improved the VC armor about as far as that recipe allowed in creating VH armor.
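The quality-factor arithmetic above works out as follows (a sketch; the function name is mine):

```python
def matching_thickness(reference_thickness: float, quality_factor: float) -> float:
    """Thickness a plate of the given relative quality (reference
    quality = 1.00) needs to match the reference plate against the same
    shell, obliquity, and striking velocity - quality scales inversely
    with the required thickness."""
    return reference_thickness / quality_factor

# Text's example: SHINANO-cache VH at 0.89 quality must be
# 1/0.89 = 1.124 times as thick as the equivalent US Class "A" plate.
```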
This 7.3" plate, the thinnest VH plate ever found, was unusual in that it had a face layer that was thicker than 35%, being about 40% face, and which looked more like the face of a WWII Krupp KC n/A plate, with a nearly linear drop in hardness from the face surface to the back layer at 40%, but without the cemented surface layer used in all Krupp KC plates (perhaps it was made using some of Krupp's design specifications due to an exchange program during WWII). Otherwise, it was metallurgically identical to all other VH plates, which were close to the old British-recipe VC armor, but with a higher carbon content of about 0.5% (the Japanese were very good at quality control, even when the quality was not the best). When this unusual VH plate was tested with the 8" Mk 21 MOD 3 AP projectile (the projectile used during most of WWII by the newer US heavy cruisers) at 30° obliquity (a standard set-up for testing US armor), the projectile only passed through the plate at a much elevated striking velocity compared to US armor of this thickness and was severely damaged in the process ("extruded" through the hole is the term used in the test report). Though many projectiles suffer lots of damage when they fail to penetrate high-quality face-hardened armor and thus suffer maximum force from the plate over the longest time, this was the first and only plate by any manufacturer (US, British, or German) tested at the US Naval Proving Ground to do such extreme damage to this high-quality super-heavy projectile during a complete penetration. These 8" Mk 21 MOD 3 AP projectiles:
Had a tiny filler (1.5%, almost a solid-shot projectile) to reduce any weakness from this cause to an absolute minimum;
Were "sheath-hardened", which was the best AP projectile hardening method, unique to US Navy post-1930 AP and SAP-type Common projectiles (I am not sure about the WWI-era Midvale Unbreakable AP projectile designs in this regard). They were hardened like one-inside-the-other Russian dolls, with the outer layer, especially the entire nose, being the hardest and the projectile getting gradually softer and tougher, with no sudden hardness drops, as one moved inward from the sides and downward from the nose. This rigidly supported the projectile upper and middle body, via a thick outer case extending far down toward the base, against high sideways forces at oblique impact, and still kept the entire center surrounding the explosive filler cavity and the lower portion of the body (up to about one-third of the distance from the base) tough and flexible when being twisted and slammed against the side of the hole as the armor plug is punched out of the plate (Note: The hardness used for the tough center and base portions of these US Navy projectile bodies was exactly the same 269 Brinell that Krupp found to be best in its last L/4,4 WWII APC projectiles);
Had an extremely-heavy AP cap of 17% or 57 lb - nearly twice the weight and thickness of the AP cap on the latest WWII German 20,3cm Psgr.m.K. L/4,4 projectile - with a very thick hard face layer at 555 Brinell through most of its volume (only the region around the projectile nose was softened to circa 250-300 Brinell to allow crimping of the lower edge into the standard ring of pits in the projectile's lower nose used with all US Navy AP caps and to "cushion" the nose following WWI theory);
Had an extremely-blunt-oval nose well under 0.8-caliber in height - 0.5 is a hemisphere - and had no point, so that it was very strong with no weak points anywhere, no matter what the impact obliquity;
Were of the new super-heavy AP projectile type similar to those used with the new ALASKA 12" and SOUTH DAKOTA and IOWA 16" guns, being 335 lb, compared to 269 lb for the German shell above, which is typical of the usual 8" AP projectiles used in WWII (including the 260-lb US Navy 8" Mk 19 AP projectile used during WWII with the older heavy cruisers - the ones without any face-hardened armor). This type was developed specifically to improve deck penetration at long range, since the projectile lost less velocity due to air resistance when it was heavier with the same pointed nose shape with the windscreen and cap attached. Increasing the weight only had a rather small positive effect on face-hardened armor penetration, though, since shockwave formation to crack the hard face is mostly due to impact velocity, not impact mass.
When they substituted the late-WWII MOD 5 with its through-hardened super-hard cap (but otherwise identical), the projectile now penetrated intact, showing the superiority of these new caps at pulverizing the hard face layer as they were destroyed by the impact, though the plate still was found to be the best face-hardened plate (by anyone) of its medium-thickness range ever tested by the US Navy (the second-best Class "A" plate in quality of that thickness range was also an experimental WWII non-cemented plate made by Carnegie, so these tests also verified that the thin cemented layer was a complete waste of time against high-quality AP projectiles, as the Japanese had determined and, uniquely after WWI, actually acted upon by eliminating it). In fact, the plate was so good that the test personnel, experts in all forms of armor, had to admit that they simply did not understand how it could be that good, implying that they really did not know what it was that made face-hardened armor work properly under impact. It is amazing that at the end of the ironclad era, the US Navy armor experts admitted in an official document that the main type of armor used in battleships and US Navy WWII cruisers was not really understood in a fundamental way!!
On the other hand, against STS or Class "B" homogeneous, ductile armor, these US Navy late-WWII cruiser-shell super-hard caps caused a 40% additional drop in the shell's penetration ability when the cap shattered on the plate - the average NBL in several tests of 6" Mk 35 MOD 9 AP vs 5-6" STS plates at 35° obliquity increased by about 27.5% instead of about 19.5% (projectile quality variations in this program are based on NBL ratios) - that is, a manual Shattered-Cap Effects entry of 1.4. This shows that the shatter was either more thorough and/or happened faster, so that the cap contributed even less to the penetration process when it shattered than the softer caps of the same design did. When this super-hard cap did not shatter, it acted just like the softer caps of the same design, as would be expected. I would also assume that the STS plate minimum shatter thickness was less at any given obliquity for these projectiles compared to the MOD 3 design, which had a cap similar to, though thicker than, the 6" Mk 35 MOD 5 AP cap (#1) used as one of the standards in this program. Thus, what you gained against face-hardened plate, you lost against thick homogeneous plate when you increased the cap hardness. Since AP caps were initially made for use exclusively against face-hardened armor on naval AP projectiles, they were not optimized for use against homogeneous armor. Even those used for anti-tank guns in WWII were merely naval designs adapted to try to prevent projectile shatter when hitting armor at very high, near-muzzle-velocity values at point-blank range, not special caps optimized for use against homogeneous armor - many later steel anti-tank projectiles went back to bare-nosed designs late in WWII because of the caps' drawbacks under some impact conditions.
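As I read the numbers quoted above, the 1.4 entry is simply the ratio of the two NBL increases (this interpretation and the function name are my own assumptions, not program logic):

```python
def shattered_cap_entry(nbl_rise_superhard: float, nbl_rise_softer: float) -> float:
    """Ratio of the NBL rise seen with the shattered super-hard cap to
    the rise seen with the softer cap of the same shape - the manual
    Shattered-Cap Effects value discussed in the text."""
    return nbl_rise_superhard / nbl_rise_softer

# A 27.5% NBL rise versus a 19.5% rise: 0.275 / 0.195 = 1.41,
# entered as 1.4.
```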
This program allows you to get some feel as to what happens when AP caps, Hoods, and/or windscreens are used to cover the noses of AP or SAP projectiles. When a good, universal nose-shape effects data-base is developed and also applied to this topic, my M79APCLC replacement of the original DeMarre Nickel-Steel Complete Penetration Formula, equivalent to a DeMarre Coefficient of 1.21 for WWII STS at right angles versus a medium-long (1.5-2 CRH) pointed ogival nose shape, will be complete for the typical steel AP projectile against average WWII US naval homogeneous, ductile armor, with and without AP caps, Hoods, and windscreens. Adding in projectile damage correctly would totally finish this project of mine. Unfortunately, I am not quite there yet.
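For readers unfamiliar with it, the De Marre relation mentioned above is usually quoted in a form like the following, with the De Marre coefficient simply scaling the limit velocity for plate quality (a sketch only - the exponents are the commonly quoted ones, the constant k and the units are left abstract, and this is not the M79APCLC logic itself):

```python
def demarre_limit_velocity(thickness: float, diameter: float, mass: float,
                           coefficient: float = 1.0, k: float = 1.0) -> float:
    """One common form of the De Marre ballistic-limit relation:
    V_limit = coefficient * k * t^0.7 * d^0.75 / sqrt(m).
    A coefficient of 1.00 is the original nickel-steel baseline; higher
    values mean the plate demands a higher striking velocity."""
    return coefficient * k * thickness ** 0.7 * diameter ** 0.75 / mass ** 0.5

# A De Marre coefficient of 1.21 (the text's value for WWII STS at right
# angles, medium-long pointed ogival nose) raises the required striking
# velocity by 21% over the baseline, i.e. about 46% more striking energy.
```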
The original use of the armor-piercing (AP) cap goes back to around 1880, when a wrought iron layer put on top of the face of a British-type Compound armor plate (through-hardened high-carbon mild steel plate bonded to the face of a thicker wrought iron plate, 25-33% steel; the lack of any alloying elements in the simple carbon steel then being made limited the highest hardness obtainable without the plates breaking apart during manufacture to at most about 400 Brinell, usually less) was found to drastically reduce the breakage on the plate face surface of the chilled cast iron projectiles then in use, and putting a small mound of wrought iron on the tip of the projectile did the same thing. Before this could be acted upon, the introduction of tougher, more shatter-resistant, hardened steel projectiles a short time later also reduced the damage caused by this kind of armor, so it was thought that such add-ons as soft nose caps were no longer needed. However, during this time the French were slowly perfecting their high-strength soft solid mild steel armor (toughening it so that it did not break apart on impact, as it did during its first tests), introduced in 1876 when it won an Italian competition for 22" (55.9cm) armor for two Italian battleships against all other armor types - mostly solid wrought iron plates made by several companies, including some British companies.
It was the loss of this competition to the hated French that caused the British to introduce Compound armor, since they found that their metallurgical skill was not up to making such thick solid steel plates that did not crumple under impact - by using the thick wrought iron back layer, even if the high-hardness, but very brittle, face was destroyed by the projectile, the damage done to the projectile in the process, especially a chilled cast iron or early form of hardened steel projectile, was usually enough to degrade its penetration ability through that back layer, resulting in a resistance roughly comparable to the French single-layer steel armor. The rivalry ended in 1890 when the French introduced nickel steel (3-7% nickel alloy) in place of regular mild steel, which was much tougher (crack-resistant) and stopped even the best steel projectiles under conditions where the Compound plates had the face layer completely pulverized and the projectiles still made it through almost intact. The steel projectiles were now strong enough to resist shatter against the hardened face layer of the Compound plate, making the all-steel French plate stronger than the weak wrought iron back of the Compound plate. This seemed to agree with the previous conclusion that such things as soft nose caps were not really needed in high-quality steel AP projectiles. They were wrong.
While the other manufacturers were going through their various competitions with wrought iron, mild steel, Compound armor, and nickel-steel from 1859 through 1890 for warships, the German manufacturer Herr Grüson - who, in competition with the Englishman Mr. Palliser, developed the first chilled cast iron AP projectiles circa 1860, which were used extensively by both the Union and Confederacy during the US Civil War against each-other's ironclads - developed in 1868 a new type of armor, first used by Russia and Germany in the early 1870s and only in unique rotating gun turrets in European border forts, including some coast defense installations: Chilled Cast Iron armor. This armor was considerably superior to any of the others until Harveyized (highly carburized or "cemented") Nickel-Steel and, later, Krupp Cemented (KC, usually with, but sometimes without, the thin super-hard cemented surface layer as with Harveyized plate) chromium-nickel steel armors came into being in 1891 and 1894, respectively. Normally, brittle cast iron would not be considered for armor, as a projectile impact would cause it to shatter like glass. However, if the projectile could be made to shatter first, then the metal has some promise, since it is easy to make, can have its hardness and brittleness altered over a range of values through its carbon content (not as well as steel, but good enough for some purposes, like this armor), and can - indeed must - be melted and poured into its final shape in one go using casting, without all the after-smelting forging, hammering, and/or rolling that wrought iron and steel require to be shaped.
Wrought iron and, to a lesser extent, steels have such high melting points that using them in a molten state is very difficult, and this was rarely done for large masses of the metal after the initial smelting of the ore into the desired alloy and the pouring of the newly-purified metal into large slabs, due to the excessive cost in furnaces. In the case of wrought iron especially, molten glass (silica) was added to act as a "flux", allowing yellow-white-hot slabs of the metal to be separately made and then pounded/crushed together into one solid piece with no visible joints. Such heating (to keep the metal hot during the entire manufacturing process and, for steel only, for the final heat treatment to its final hardness and toughness internal contour) and the mechanical working and plate-moving equipment needed at yellow-white temperatures require expensive equipment in huge machines when making plates large enough for warship armor. All that cast iron requires after it hardens is the grinding and polishing of the surface, the grinding of the edges for fit with other parts of the object being made, and some means to haul the cold final plates to their destination for assembly, which is much less expensive - hence the popularity of cast iron over the ages, when it can be reliably used.
Hardening of steel and cast iron is possible due to the fact that when "ferrite", the room-temperature form of pure iron, has its temperature raised to over about 723-727° C (1333.4-1340.6° F) (this is typical, but it depends on the various alloys of iron and the amount of carbon), its crystals dissolve and another iron crystal, "austenite", forms, without melting into a liquid. Ferrite has a body-centered cubic structure, with each "cell" in each separate crystal made up of eight iron atoms in a cube with another in the center. Only the center atom is unique to each cell, with the corner atoms being shared by all adjacent cells in a seemingly endless array of cubes going off in all directions. This form has virtually no room inside for any other atom, being a tight-packed array. Only the flaws in the structure as it forms (twists, missing atoms, extra atoms, etc.) and the boundaries between the various crystals - where each grew separately from a different "seed" and then slammed into one-another, forming irregular boundaries with some gaps and perhaps jagged edges - allow other elements (for good and bad) to slip into the structure. Austenite, on the other hand, is face-centered cubic, with the central iron atom moved to one of the faces, forming an X with the original four corner atoms on each face and a new atom in the center of each face, requiring five more iron atoms per cube, though again all of each cube's faces are shared by adjacent cubes. Since a cube has six shared faces, each austenite cell contains more iron than a ferrite cell and the iron atoms are actually packed slightly more densely overall; what matters is that the interstitial hole at the center of each austenite cell is much larger than any gap in ferrite. That center is empty, and single atoms of several different kinds can fit inside, one per cell - most importantly carbon - so austenite soaks up a certain amount of these non-ferrous elements like a sponge.
When the austenite is very slowly cooled back to ferrite ("annealing"), the re-forming ferrite cells expel these non-iron central atoms (with a couple of important exceptions, such as nickel) and push them into the boundaries between the ferrite crystals, where they change the properties of the iron by affecting how the crystals bond to one-another and move around. They can also be chemically removed during the smelting process, if undesirable, as are the softening elements sulfur and phosphorus in armor steels. This is why the glassy silica slag coating between the ferrite crystals works so well in wrought iron as a bonding flux and anti-corrosion agent, reducing the problem of corrosion in sea water to practically nothing for well-made wrought iron. The more of these non-iron elements there are, up to the point where all of the austenite cells are filled, the longer it takes to push them out on cooling; since there is not an infinite time to cool the metal, some amount of these elements always gets trapped in the ferrite crystals' interiors - the more such atoms there are in the metal when smelted in liquid form, the more get trapped for any given cooling rate. These trapped atoms also have varying effects on the iron, again with carbon being the most important here. Carbon can be soaked up by austenite to a maximum of 2.04% by weight at a single temperature of 1146° C (2094.8° F), where it fills all of the available cells, and this is the maximum carbon percentage for the metal steel. A carbon weight percentage of under 0.08% is the bottom for steel in most definitions, below which the effects of the carbon become undetectable for most uses - and, if silica is added, wrought iron is made, instead.
Go over 2.04% carbon (and under 6.67% carbon, the maximum amount that iron can chemically bond with under the most extreme conditions, forming pure "cementite" (see d, below)), with the typical value being around 4%, and you get the various forms of the widely-used metal cast iron, technically the crystal "ledeburite", along with some large super-hard cementite crystals (again, see d) always formed on cooling along with the ferrite. Some carbon - beyond the ability of the austenite to soak up when formed at high temperatures - is always stuck between the ferrite and cementite iron crystals in the form of graphite (pure free carbon) and even more tiny cementite crystals, modifying the ferrite and cementite crystals' ability to bond to each-other properly and making the material brittle, since carbon does not strongly bond to other carbon atoms when they are not in the same crystal (within the same crystal, carbon can bond extremely strongly, hence diamond and graphene and carbon nanotubes) - the cast iron can crack and break in a jagged line between crystals under a sudden force. This is also a problem with cast steels (steels with carbon almost as high as cast iron in many cases, to make them easier to melt), though post-hardening heat treatment ("tempering") can toughen this metal to some extent to reduce cracking, as can adding other alloying elements, especially those like nickel. Many variations of the ferrite/austenite heat treatment can be used, including mechanical working at low temperatures to create local high temperatures where the metal is being bent, compressed, or pulled the most, which can cause the temperature at those spots to cross the ferrite/austenite boundary and change the properties of the metal when it cools back down.
For example, so-called "cold-rolled" carbon steel is hardened during rolling to thinner gauges due to local heating where the rollers are squeezing the metal, which then abruptly cools when the metal passes the roller, acting like a low-grade quenching process (see d, below). Steel is quite complex in its behavior during manufacture and this took many years to figure out (there are still unknown things to this day!).
When austenite with carbon dissolved in it is more quickly dropped in temperature to the ferrite level, the crystals that try to shape themselves back into ferrite do not have much time to push out the intruding carbon, so a lot of the carbon inside the austenite gets trapped inside the iron, preventing ferrite from being formed at those points - the more carbon, the more of this interrupted ferrite forms in the metal for a given higher cooling rate. This quick cooling is termed "quenching" when it involves dropping the hot austenitic iron into cold water or hot or cold oil to soak up the heat, or spraying pressurized water against the plate surface, or "chilling" when it is done in the original solidifying mold for cast iron and cast steels. For pure iron and wrought iron with nothing but silica inside it (at least so little of any other impurities that it doesn't matter), this quenching or chilling does not do much, though it can cause the ferrite crystals that form to be in unusual shapes and sizes, with some effect on the final metal, but not a lot (hammering or rolling the iron will remove such irregularities, even at room temperature). With carbon (and some other alloying elements, like chromium), though, this trapped element does not just sit there as an inert lump. It is a chemically active element, as is iron, so as they are crushed together, they form chemical bonds inside the interrupted-ferrite crystals. With iron and carbon, the crystals that form are called "cementite", mentioned above under Harveyized, KC, and Chilled Cast Iron armors, which form pyramid-shaped crystals of one carbon atom and three iron atoms, with the adjacent cells sharing the iron much as in ferrite cubes (technically, it is an iron-based ceramic with 6.67% carbon).
In steel, the mixtures of cementite and ferrite can be complex, forming such things as "martensite", where each crystal is mostly ferrite/cementite mixed tightly together into a hard diamond-shaped crystal with some free ferrite in-between formed during quenching or chilling, or as "pearlite", where there is much more ferrite due usually to lower carbon content in the first place (many cells in the austenite were empty in the center) and/or slower cooling and the crystals formed are layers of alternating cementite/martensite and ferrite, which are much tougher due to the soft ferrite cushioning the hard layers in a sandwich format with a potential for optimum hardness and toughness (that is, usable strength) if manufactured properly. A number of other crystals with various properties are also possible. The process called "tempering" to toughen steel (make it less brittle for a given hardness and crystal type) involves re-heating the quenched/chilled metal to some temperature below the austenite-forming temperature and leaving it there for a given time to "relax" places where the crystals were so distorted during the quick cooling of the initial hardening that they were "accidents waiting to happen" where cracks could start. Martensite, for example, is called fresh or "white" martensite prior to tempering and aged or "yellow" martensite after tempering - note that tempering can occur even at room temperature, though the cooler the metal, the longer it takes, with such things as Japanese swords with their extremely hard sharp martensitic edges being treasured when they got old, since they very gradually self-tempered at room temperature and were tougher (here less liable to break during a fight) than when they were first made (the old Japanese swordsmiths did not know about tempering). Tempering is an art, just like quenching/chilling, since doing it in a non-optimum manner can compromise the properties that you want in the final object being made.
Knowledge of how to do these hardening and toughening processes and how to select the proper alloys to add to the steel, especially carbon, dearly won after a lot of trial and error over the centuries, is the reason that steel is so useful.
The higher the carbon content, the lower the melting point of iron, with cast iron being rather easily melted even with the primitive smelting and forging furnaces in use in ancient times. Since the furnaces were heated with burning wood (later also coal and coke, a special form of coal), there was a lot of carbon everywhere, most especially as many furnaces involved actually putting the iron object inside the furnace itself, touching the burning wood/coal. Cast iron was thus quickly discovered. Its brittleness precluded its use in most armor or any weapons that might hit armor, but for most other uses it was amazingly useful, especially in making complicated shapes in molds used in internal parts of various things - connectors/supports holding wood or rock objects together, for example - or in durable goods like pots and pans exposed to high heat when cooking (but not as high as when the metal was poured, of course!) where their high surface hardness made them long-lasting if not dropped from a height onto a hard rock or cement floor or otherwise broken (they were much stronger than clay-based ceramic items like containers and cups, though much more expensive). The regular form of cast iron, formed with slow cooling in the mold, is "grey" cast iron of moderate hardness and strength (roughly 260 Brinell hardness) and is brittle, but not excessively so in thick sections, so it is used the most of all forms of cast iron. It is a mixture of ledeburite (mostly), ferrite, cementite, and graphite, hence the color.
Chilling in the mold will form "white" cast iron, the color of the cementite that is now the major crystal formed, which is very hard, up to 450 Brinell hardness, though not as hard as properly-quenched and tempered high-carbon alloy steel, which can be up to 500-550 Brinell in bulk, such as the hardened steel in KC armor just behind the thin cemented face layer and 600-700 Brinell in the near-2% carbon thin cemented layer of both KC and Harveyized armor steels - the excess carbon in cast iron forms extra-large cementite crystals and creates graphite and small cementite crystals in-between, all of which limit the rigidity of the final simple carbon-iron product somewhat compared to high-quality steel (with added alloys, cast iron, like steel, can have many other properties and is used in a number of modern applications in place of steel due to cheaper manufacturing). Re-heating white cast iron to just below the melting point and then slowly cooling it (a form of annealing) can form a somewhat tougher "malleable" cast iron that can flex somewhat and so be bent, though only a little bit (this does not help as much when grey cast iron is used as the original metal, though).
Chilling of cast iron by Herr Grüson was done by making the part of the mold forming the face of the armor plate out of wrought iron, with water behind it to soak up the heat of the poured cast iron very fast, while the rest of the mold is made up of a material that is more insulating, such as hard-packed sand, which causes much slower cooling. This forms white cast iron in the metal near the wrought iron mold regions, trailing off rather gradually to grey cast iron for the rest of the cast iron plate. The chilling process can be adjusted by limiting how much water is behind the wrought iron plate and how thick that plate is, since if the water all steams away this plate will then heat up to the temperature of the cast iron touching it and only more slowly cool down from then on. No matter what, there is a limit as to how deep into a plate the chilled region can go before it feathers into the unhardened portion. I assume that the thinner portions of Chilled Cast Iron armor, circa 6" (15.2cm) found at the edges of the plates in some of the smaller turrets, can have up to 33% hard white cast iron face depth (to the point inside the plate where you cannot see the white cast iron crystals under a microscope from a sample), with 25% being the maximum for the thickest armor of this kind, 33" (83.8cm, the thickest armor plates ever made, used in some German coast defense forts built in the 1870s), linearly changing from one to the other as the thickness changes. I get the former value since this is the thickness used in the later Herr Krupp's KC armor, which was based on the Grüson armor, while the very, very thick plates of the largest Chilled Cast Iron turrets were so thick that I cannot see how a deeper face could be formed by simple chilling, no matter what.
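The assumed face-depth limits just given reduce to a simple linear interpolation in plate thickness. Here is a minimal sketch of that assumption only; the function name, the clamping outside the 6"-33" range, and the use of Python are mine, not from any Grüson-era source:

```python
def chilled_face_fraction(plate_inches):
    """Assumed hard white-cast-iron face depth as a fraction of total
    plate thickness: 33% at the thinnest (6") plates, linearly falling
    to 25% at the thickest (33") plates ever made.  Thicknesses outside
    that range are clamped, since no such plates were produced."""
    t_min, t_max = 6.0, 33.0        # plate thickness limits, inches
    f_thin, f_thick = 0.33, 0.25    # face fraction at t_min and t_max
    t = min(max(plate_inches, t_min), t_max)
    return f_thin + (f_thick - f_thin) * (t - t_min) / (t_max - t_min)
```

Under this assumption a mid-range 19.5" plate would get about a 29% white cast iron face depth.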
These turrets get their resistance not only from the thick white cast iron face layer, but also from their shape. They are made of a ring of wedge-shaped cast iron plates soldered together at the edges (no bolts or rivets can be used in the brittle metal without compromising it by creating crack-forming points), where usually the thickest plate or plates (for one or two guns, respectively) had oval holes in their centers for the gun or side-by-side guns to fire through (the guns were short-barreled and did not protrude very far beyond the armor, much like US Civil War MONITOR-type gun turret weapons). Guns from 15cm (5.91") to over 30.5cm (12") were used in various-sized forts over the years, with various-sized turrets to match (some of these turrets were still in use at the beginning of WWII). The turrets were counter-sunk a short distance into the center of wide, flat rings of very thick concrete, called a "glacis", with only a narrow gap between the turret face and the edge of the glacis to allow the turret to rotate 360°. The turrets were balanced internally to compensate for the armor being thickest around the gun(s), which it was assumed would be facing the enemy. They used steam engines to move them, since they weighed far too much for any hand-powered system to be practical. The turrets were shaped like the turret of a Soviet T-54/55 tank, being mushroom-tops with a nearly flat top and tightly curved sides in the vertical plane (they also curved in the horizontal plane somewhat to meet flush with the adjacent plates to either side with no corners or gaps, so they were rather complex 3D shapes formed in very carefully shaped, tight-tolerance molds of huge proportions). The plates were thickest where they were to be vertical just above the glacis and the thickness dropped smoothly to about half at the upper and lower edges.
The lower edge was only a short distance below the glacis, allowing maximum space inside for the vertical supports and gun and projectile handling equipment projecting up from below, but the upper edge curved back for some length when seen from the side prior to fitting into the turret, smoothly thinning from the center to the upper edge. This meant that the upper edge was tilted backward at about 70° from the vertical and projected about 15% of the turret's diameter into the roof area, reducing the central roof hole to only about 70% of the turret diameter. The original design had more chilled cast iron plates of only about 3-4" (7.62-10.2cm) forming this almost-flat curved central roof, but these were shown in tests to be too brittle and weak when hit by steeply-falling mortar rounds, even the rather small, low-velocity mortars then in use, so the design ended up with wrought iron roofs of this thickness instead, which were proof against those projectiles (this weakness of face-hardened armor in thin sections compared to homogeneous, ductile plate when hit at high obliquity, requiring extra thickness and weight to compensate if the hard armor has to be used for other reasons, or against large-caliber projectiles wider than the plate thickness at any obliquity of impact - the mortar rounds here - is a generic problem with face-hardened armor, which is why it is not used for places where such hits are considered probable). The inability to put any holes or even deep notches into the cast iron plates (the gun ports were smoothly rounded and had beveled edges on the outside to eliminate corners) required that they be soldered together by pouring a liquid metal into the narrow gaps between the plates and then heating the plates near these side edges so that they tightly stuck together when cooled. 
Note also that the shape of the plates meant that projectile impacts would compress the plates together, making them mutually supporting with the plates to either side, an added plus to the design strength of these turrets. There was never any problem with plates separating after assembly. The wrought iron roofs were made of heavily-braced triangular plates (pie wedges) or, for the larger turrets, concentric rings of shorter plates that formed triangles when fitted together, riveted on their mutual edges and counter-sunk flush with and possibly soldered at the joint with the upper edge of the cast iron side armor ring - the connection to the side armor using solder seems somewhat "iffy" to me, given the flexibility of the wrought iron if hit, so I think that it would be much stronger if a ring of closely-spaced clamps were riveted to the wrought iron plate edge in contact with the cast iron edge and used to grip the edge of the side armor tightly all around. I do not know for certain how this boundary was held together. I do know that the manufacturing process Herr Grüson used created near-perfect, uniformly-made plates, both in shape and in internal crystal structure, with rarely even small surface cracks or internal non-uniformities in the iron grain. As it turned out, even with cracks, the cast iron plates performed properly, as tests with a few rejected plates showed - they were just as strong as the perfect plates when tested by the guns then in use under repeated, overlapping hits. Only a direct hit on a gun port could seriously damage anything inside these turrets through the cast iron armor.
These turrets not only stopped projectiles, they, in the terminology used when they were first introduced, paralyzed the projectiles by shattering them into tiny pieces on impact - even hardened (chilled) solid-steel projectiles of the later portion of the time frame where these turrets were made (from the 1870s through the 1890s), were found to do nothing but crack and chip the surface, with only the occasional shallow chunk of hard face material being gouged out by repeated hits on a single spot, which still did not make a hole in the plate when hit again on the same spot. The virtually instantaneous fragmentation of the projectiles on the surface prevented them from applying any concentrated force on the impact point to force an opening in the plate material, here possible only by breaking open the armor all the way through all at once and throwing the armor in the projectile's way as a (broken up) solid armor plug pushed out through the plate back - other than a small amount chipped from the face surface and thrown sideways - using a sustained punching action, since the plate did not bend and the rigid material to the sides of the impact point prevented any sideways opening of the hole. These results made the naval armors then in use almost look like wood in comparison. Obviously, other manufacturers wanted desperately to find a way to match these results in naval armor, with lighter plates and with plates of more varied shapes, such as flat armor waterline belts. Many attempts failed during tests, some from ignorance of what was required and some simply from lack of the needed knowledge of metallurgy and the necessary skill to apply it, even when it was known.
The introduction by the French in 1890 of solid homogeneous, ductile nickel-steel armor (finally defeating for good British Compound armor), which was stronger than any previous steel armor and was so tough as to remain virtually crack-free even when penetrated entirely through after being hit several times previously nearby, changed the "ball game" and finally allowed an alternative to Chilled Cast Iron armor that was comparable or even superior in thinner sections and which could be formed into the shapes needed on a warship, particularly into flat plates with no loss in resistance. Nickel is very similar to iron in its weight and chemical properties, so similar that it will mix into iron and replace iron atoms with virtually no problem. However, the atoms do have a different size and weight and are slightly different chemically and mechanically from iron, so that when a crack starts to form and its tip moves through a crystal with nickel in it, the crack tip slams into the nickel and suddenly things are no longer symmetrical as with all-iron crystals; the nickel acts like a piece of cloth in a zipper and jams the crack tip. Thus, nickel mechanically increases the toughness of an iron alloy when used in proportions up through 3-4% without reducing the alloy's strength (nickel content of up to 7% was used in the first forms of nickel-steel, but it was later found that reducing the percentage to only 3-4% did not reduce the toughness and increased the strength of the alloy, since iron was somewhat stronger than nickel in steel - even small amounts of nickel had desirable effects on crack resistance, and such small amounts were, and still are, used in many steels developed for construction purposes, not armor). Nickel is not cheap, though, so reducing it to a minimum for any given requirement is cost-effective, too.
In small amounts, copper and other metals can give nickel-like effects, but only nickel has the ability to be used in a large range of percentages without any bad side-effects on other properties of the metal alloy.
The first person "out of the gate" in the pursuit of a practical replacement for Chilled Cast Iron armor in both naval and land fortification purposes, using the new nickel-steel as a starting point, was a Mr. Harvey who had previously worked for Bethlehem Iron and Steel Corporation in the US. In 1890, just after it was first introduced, he obtained a 25cm (9.84") French nickel-steel plate and applied an old technique called variously "carburizing", "case hardening", or "cementing" (hence the name of the material produced, cementite) to the surface facing the enemy projectiles. What he did was take the plate, put it face-down onto a bed of finely-ground, tightly-packed, uniform coal so that its heavy weight pressed it tightly into the coal over its entire face surface, in a form-fitting wrought-iron box, and rolled the box into an air-tight oven where it was heated to a temperature slightly above the austenite-forming temperature. He left it in the oven for a couple of weeks. During this period the transformation from nickel-steel pearlite to nickel-steel austenite was slow and the carbon in the coal gradually moved into the plate during the transformation and after, as the compressed coal's free carbon atoms worked their way into the iron crystals due to the heat making the iron molecules jiggle and the loose carbon move very slowly past the iron crystal boundaries as they opened and closed, as well as the change to austenite allowing much of the carbon to soak into the cells in that iron crystal type.
The depth of this penetration varied with the time in the oven and the exact temperature, but it ranged from about 0.75" (19mm) to 1.5" (38mm), raising the carbon content in the center of this region to about 2%, falling off at the back of this layer in a smooth curve and being reduced at the very surface also after the plate was removed from the oven due to being burned out and turned into carbon dioxide by oxygen in the air and quenching water (which is why the oven had been air-tight). The still red-hot plate was then heated even more on the face side to a precise temperature to homogenize the layer, allow the austenite to absorb virtually all of the remaining free carbon, and remove any "kinks" in the structure due to irregular carbon patches, and then it was suddenly quenched cold using high-pressure water jets, which caused most of the steel in this layer to form cementite, with each cementite crystal thinly surrounded by a layer of ferrite and martensite. The rest of the plate was hardened somewhat, too, but as it was a low-carbon-content steel with lots of nickel, it remained tough enough to act as a shock-absorber without cracking too badly under heavy impacts (when the projectile face shattered, the time that the force was concentrated on the impact point was considerably reduced, so the main portion of the plate did not have to remain strong for very long if the cemented face layer performed its task properly). This face was extremely hard and, since its carbon content, while very high for a steel, was still low enough that virtually all of the carbon was able to soak into the austenite before quenching, it could harden to the maximum extent possible for steel, in the 600-700 Brinell range. The technique was called, sensibly enough, "Harveyizing" and plates made using it were "Harveyized" or "Harvey" plates, though as time went on the technical term cementing came back into use here.
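The carbon-versus-depth curve described above can be sketched piecewise: slightly depleted at the very surface (carbon burned out by air and quench water), peaking near 2% around the middle of the carburized layer, and falling back to the plate's base carbon content at the layer's back. The ~2% peak and the 0.75"-1.5" layer depth are from the text; the surface and base carbon values and the straight-line segments are my illustrative assumptions, not measured data:

```python
def harvey_carbon_percent(depth_in, layer_depth_in=1.0):
    """Illustrative carbon content (%) vs. depth (inches) in a
    Harveyized face.  layer_depth_in is the total carburized depth
    (roughly 0.75" to 1.5" per the text).  The 1.7% surface and 0.3%
    base carbon values are assumptions made only for this sketch."""
    surface_c, peak_c, base_c = 1.7, 2.0, 0.3
    mid = 0.5 * layer_depth_in
    if depth_in <= 0.0:
        return surface_c                  # burned-out surface value
    if depth_in <= mid:                   # rise from the depleted surface
        return surface_c + (peak_c - surface_c) * depth_in / mid
    if depth_in < layer_depth_in:         # fall-off toward the plate body
        return peak_c + (base_c - peak_c) * (depth_in - mid) / (layer_depth_in - mid)
    return base_c                         # unchanged core steel
```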
The thickness of this face was more-or-less fixed no matter how thick the plate itself was. For countries looking for a "cheap fix", Harveyizing was sometimes applied to mild steel, which was not as strong as nickel-steel but a far cry from the rather weak, brittle material it was before 1890 (metallurgy had improved in many ways, not just in how to use nickel in steel) - theoretically, you could Harveyize wrought iron, though nobody did such a thing. Later on, Krupp and some other manufacturers substituted pressurized jets of natural gas or methane (that is, "illuminating" or "lamp" gas - in those days prior to electricity being everywhere - and kitchen range/oven and house heating gas) into the sealed oven, so that the carbon of the gas would soak into the face of the armor, faster and more uniformly in their theory than with solid powdered coal. I do not see any real difference in which method of cementing armor was used, as to final results. Harveyizing or cementing was an expensive process with special ovens and weeks of time lost in the manufacture of the plates (time is money!), so some attempts were later made with forms of KC armor to remove the "C" part of that name and use only the Krupp "decremental" hardening process (see k, below), which made a deep face similar to Grüson Chilled Cast Iron armor, without the cemented layer - Japanese Vickers Hardened (VH) side armor on the YAMATO Class battleships was of this type and was the most successful form of this "non-cemented" face-hardened armor mass-produced (if you can call just two ships built as "mass produced"). [NOTE: The term "non-cemented" in British and Japanese practice ALWAYS means NOT FACE HARDENED, regardless of any other property, which is why the Japanese called their YAMATO armor VH, not VNC - they already had a homogeneous, ductile armor called New Vickers Non-Cemented (NVNC) made from the same steel alloy as VH and used in several warships from the early 1930s. 
US terminology allowed the literal meaning, so there was, for example, Midvale Non-Cemented (MNC - not to be confused with Japanese YAMATO-Class-only Molybdenum Non-Cemented (also MNC) homogeneous, ductile armor used for decks and turret roofs) Class "A" (face-hardened Krupp-steel) armor, face hardened, but not cemented, made during the period from 1906 to 1910 by that company. Somewhat confusing at times.] Cementing was the universal process used in most of these armors by anyone since Harvey first introduced it, even when alternatives, such as armors like Japanese VH, of equal quality were quite possible to make (tradition and inertia are hard to overcome, it seems). Conversely, in very thin plates with hard faces, Harveyized armor, using alloys other than plain nickel-steel, remains quite widely used, since there is no need to use any other method on top of it, due to it making a face layer of the desired thickness all by itself. Harveyized armor contrasts with Chilled Cast Iron armor in that it forms a very hard, but very thin (in thick plates used against cruiser and battleship guns), surface layer to destroy the projectile's nose and, hopefully, its entire body to its base on impact, while the cast iron armor forms a softer, but much thicker, layer to do the same thing. [AN ASIDE: Harvey started his own company to make his new armor and patented it, allowing others to make it under license (it was made as the main naval side armor for much of the 1890s and, in some countries, even in the early 1900s, since it took some time for the much more expensive KC armor - both in alloys required and special manufacturing equipment like special ovens that could heat only one side of a plate and the large quenching pits needed - to replace it, starting in Germany, of course). He had no problems that I know of with foreign manufacturers paying him royalties, but he had a big problem getting the US Navy to do so!
He had to sue them and go all the way to the US Supreme Court before he was awarded his money, which the Court stated he deserved and that the US Navy was, in effect, a bunch of weasels (they used more formal language, of course, but their opinion was clear) for not paying him.]
While nickel was being introduced as a major armor toughening alloy, chromium had been under study and was then being applied to AP projectiles as a major hardening agent, allowing low-carbon steels to be hardened to much higher levels without trying to quench them so fast that they bent and cracked under the sudden temperature change. Chromium forms its own very hard carbides with the carbon in the steel, hardening the steel more efficiently using a given carbon content with a much less abrupt and/or a much slower quenching process. Its success in this purpose, making stronger AP projectiles without high carbon levels that increased brittleness, brought it to the attention of those who were trying to make steel as hard as or even harder than the Chilled Cast Iron armor that was their then-current goal, by quenching the plate. However, thick armor, even with a high carbon and nickel content, was difficult to quench without ruining the plate, so something was needed to allow the carbon content to remain rather low (under about 0.5%, preferably in the 0.2-0.4% range) to reduce brittle behavior during manufacture, but still allow a very high hardness deep into the plate when final quenching was done, as with Chilled Cast Iron armor, to try to replicate or beat the results of that cast iron armor using high-quality steel. As mentioned, chromium allowed a higher hardness with a slower quenching rate, so it could work with thicker plates, where the rate of cooling had to be slower if the plate was not to deform and crack. Just what the doctor ordered! The first to perfect this, just three years after Mr. Harvey introduced his cemented armor, was Herr Krupp, who owned the huge steel company in Germany that made many guns and projectiles, both for Germany and for foreign customers.
He had been making Compound armor until French nickel-steel made that abruptly obsolete and thus he was in the market for a replacement, preferably superior to anyone else's armor so that he could "corner the market" on naval armor (and any other armor market that might come up later). He immediately licensed the Harveyizing process and studied how it worked. He studied how nickel and chromium worked in steels, both separately and together, since he was using both in his armor and projectile plants. He studied how to remove unwanted impurities in steels and how to even more carefully than before control temperatures throughout the manufacturing process of steel, including the final quenching process, where bad things could happen after so much work had been done. He then did something nobody else could do: He bought out Herr Grüson, lock, stock, and barrel, to get the metallurgical secrets that Herr Grüson had used to make his high-quality, extremely thick Chilled Cast Iron armor. He studied these too. In 1894, his company introduced the armor steel that, with variations, all subsequent steel armors and extra-high-strength construction materials for ships and land vehicles, both homogeneous and face-hardened, are patterned on: Chromium-Nickel Krupp steel, sometimes called "Krupp Soft" in its original form and used as homogeneous armor in many variations, and its face-hardened form, Krupp Cemented armor (KC, later called by Krupp, KC a/A for "KC Old Type", when post-WWI versions were made by Krupp called KC n/A for "KC New Type"). KC armor was also subject to many variations, depending on the theory that the manufacturer and the navy he was making it for had about what was the best way to make it (sometimes right and sometimes wrong).
It incorporated (1) the cemented surface of the Harvey plate (using the illuminating gas method of carburizing), (2) a very clean (for the period) low-carbon steel using only about 0.3% carbon with special attention to removing sulfur and phosphorus and using silicon and manganese in small percentages to improve the hardenability of the steel and further reduce the effects of any remaining sulfur (these "silicon-manganese" steels, with no or only small amounts of nickel or chromium, were and are widely used in construction, naval and otherwise, eventually perfected to the point that they were almost as strong and tough as chromium-nickel steels in their highest grades, though never quite as crack-resistant under high impact loads as full-alloy chromium-nickel Krupp armor steels were), (3) nickel at around 3.5% as in the best nickel-steels, and (4), for the first time in armor steels, chromium at about 2% to allow thorough, though slower, deep quenching of the plate face and a higher overall hardness level (200 Brinell and up) at the plate back without sacrificing any toughness. One thing it did not incorporate was a post-quench temper, since Krupp thought that any reduction in hardness, even the small one that a good temper gave, would reduce the resistance of the armor more than the increase in toughness caused by the temper would improve it. In this he was wrong and most foreign versions added a post-quench temper, with small or even major increases in plate resistance because of this (though, in Krupp's defense, at the time KC a/A armor was introduced not much was known about the proper tempering methods for alloy steels and some tempering techniques caused just the problems Krupp was worrying about, though greater knowledge later showed how to correct this and get tougher armor through proper tempering).
Due to a mistaken belief that his armor was the best, Krupp never changed his production method or armor "recipe" for KC a/A armor through the end of WWI, at which point he got a rude shock when British KC-type armor, later called merely Cemented Armor (CA), was tested alongside his and found to be noticeably superior, much of it due to improved tempering processes. The difference at the time was not great, but it showed that there was significant room for improvement in KC-type armors, which did indeed happen when such armors started to be made again in the mid-1930s after about 12 years of minimal production due to the Washington Naval Treaty's "Battleship Holiday". These later armors, including Krupp's own KC n/A as used in the battleships of the post-WWI SCHARNHORST and BISMARCK Classes, when made using the best metallurgical skill of the time, were considerably superior metallurgically to KC a/A and had much better resistance compared to similar old plates made with nominally the same face thickness and back hardness, though there were some properties that were still not known properly, such as the effect that face layer thickness as a percentage of total plate thickness had on scaling - that is, on the reduction in plate resistance of identical steel armors as they are made thicker to resist larger AP projectiles of otherwise identical design - leading to several different face thickness and back hardness designs, with sometimes negative effects on plate resistance, where only one design was really optimum for any plate thickness and enemy weapon size.
To get some idea of how much improvement happened between KC a/A and the average plates of similar design (33-35% face layer) in WWII, in my FACEHARD program, to get the same protection as an average US Class "A" plate at the end of WWII would have given with only that face thickness (instead of the 55% actually used), a 10" US plate would have to be replaced by a 12.07" KC a/A plate; that is, the KC a/A plate is only 82.8% as good on an equal-weight basis as the modern KC plate, mostly due to better metallurgy and optimum post-hardening temper treatments. And some experimental plates showed that this could have been improved on noticeably, had face-hardened naval armor continued to be produced much after WWII, which it did not.
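The equal-weight figure above is simply the ratio of the two thicknesses (same steel density and plate area, so thickness is proportional to weight). A minimal check of the arithmetic, with the function name my own and not from FACEHARD:

```python
def equal_weight_quality(modern_inches, older_equivalent_inches):
    """Quality of the older armor relative to the modern plate on an
    equal-weight basis: the modern plate's thickness divided by the
    older plate's thickness needed to give the same protection."""
    return modern_inches / older_equivalent_inches

# A 10" late-WWII US Class "A" plate matched by a 12.07" KC a/A plate:
kc_aa_ratio = equal_weight_quality(10.0, 12.07)   # about 0.828, i.e. 82.8%
```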
Krupp Cemented armor was much more expensive to make than Harveyized nickel-steel armor was. It required expensive chromium in rather large amounts, it required cleaner steel to prevent undesired alloying elements in the metal that could make the face layer less hard (and in later versions, less tough), and it required very tight tolerances as to temperature control when the face layer was being heated prior to the final quench and during the quench itself since, even though Krupp was trying to emulate Chilled Cast Iron armor, he could not afford to pour the low-carbon steel into a chilled mold to get the armor he wanted (the temperatures involved were so high as to be completely unobtainable), so he had to use post-pouring heat treatments and mechanical working (forging, hammering, and rolling) to get the properties he wanted and the shape of the plates he wanted, which also added to the cost considerably compared to Herr Grüson's simple cast armor and also added somewhat to the cost compared to Harvey armor. The major difference between the manufacture of Harvey armor and KC armor, ignoring the difference in the steel recipe followed in the original smelting to get the final percentages of various alloying elements in the metal (this was quite expensive, too, since purifying the liquid steel in the original furnace required successive additions of alloying elements and chemical purifying materials that were themselves removed later on, in just the right order), was the very expensive heating and quenching system needed for chromium-nickel steel, since these plates were much more sensitive to these processes than were any previous armors and required special handling.
Once formed to the desired shape - the final shape plus some precise deformations that would be self-correcting as the heating and quenching process was performed - the KC plate was put into the cementing oven (using either the solid coal bed or the gas jet method of carburizing) for the time required to create the desired thin cemented face layer (different manufacturers had different ideas about this, too, even when they thought it was still needed at all); though, since the face was not yet quenched, the thin layer was not really cemented yet, just a high-carbon steel layer. When removed, the plate was inspected and then moved to the face-heating oven. In that oven the gas jets hitting the face were on fire, not just thick smoke as in the carburizing oven, and they heated the face well above the austenite-forming temperature, while the plate back was kept at a much lower temperature to prevent it from hardening very much when later quenched. By observing small thermal indicators placed along the plate edge - materials that melted at specific temperatures - through small ports around the oven, the slow movement of the high-temperature face zone toward the plate back surface could be followed closely. When the face was at the proper temperature to the proper depth into the plate (yellow-white hot, compared to merely orange-red hot for the back layer) - in the final KC a/A armor plate, usually just deep enough to form a roughly 33% face layer, the average distance from the face surface to the point where no hardening of the plate back was later evident - the plate was rapidly removed from the oven and moved to a special quenching pit. Here the huge plate was set up so that a set of very-high-pressure water jets would spray the plate's front and back surfaces, though the back spray was used only after the face had cooled down to near the temperature of the back surface, after which it cooled the entire plate the rest of the way to room temperature evenly. 
This quench process hardened the cemented layer to its 600-700 Brinell value and hardened the white-hot face behind the cemented layer to 490-550 Brinell; the hardness then decreased toward the back until it reached the region that had been kept at the back layer's lower temperature, beyond which the plate was at 190-240 Brinell, depending on the manufacturer's theory as to how the back layer worked to strengthen the plate. In KC a/A the face stayed near the highest deep-face Brinell hardness that could be made reliably for about 15-20% of the plate thickness behind the face surface and then dropped off in a "ski-slope", rapidly at first and then smoothly merging with the back-layer hardness at roughly 33-35%. This change from the surface hardness to the back hardness is called "decremental hardening", as it is usually not a sudden step but a gradual process, though this is not always true of some manufacturers' plates. Note that the face percentage thickness, and how suddenly the hardness changed as one moved from the face surface back into the plate, varied a lot among armors made by other manufacturers, and even in Krupp's own plates, over the period between 1894 and 1955, when naval armor completely died as a separate project in the US Navy (the last hold-out, to my knowledge). There have been plates with a circa-80% face thickness and other plates with only a 20% face thickness, both made by the same manufacturer at different times (here The Midvale Company, the former being pre-WWI MNC and the latter being Bethlehem Thin Chill (BTC), made under license after 1921 for some warships, most of which were cancelled by the Washington Treaty in 1923). 
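The KC a/A hardness profile just described can be sketched as a simple piecewise curve. This is purely my own illustrative model of the "decremental hardening" shape, using the rough Brinell ranges and face percentages given above; the breakpoints and the linear "ski-slope" stand-in are assumptions, not Krupp measurements:

```python
def brinell_at_depth(depth_fraction):
    """Rough idealized Brinell hardness at a fractional depth into an
    illustrative KC a/A-type plate (0.0 = face surface, 1.0 = back surface).
    All breakpoints and values are illustrative assumptions."""
    if depth_fraction < 0.01:       # very thin cemented layer (600-700 BHN)
        return 650
    if depth_fraction < 0.175:      # deep hard face, ~15-20% of thickness
        return 520                  # mid-range of 490-550 BHN
    if depth_fraction < 0.34:       # "ski-slope" down to the back layer
        # linear stand-in for the real rapid-then-smooth decline
        t = (depth_fraction - 0.175) / (0.34 - 0.175)
        return 520 - t * (520 - 215)
    return 215                      # tough back, mid-range of 190-240 BHN

for d in (0.0, 0.10, 0.25, 0.50):
    print(f"depth {d:4.2f} of plate thickness: ~{brinell_at_depth(d):.0f} BHN")
```

A real profile would be a smooth curve rather than straight segments, and, as noted above, other manufacturers (and Krupp at other dates) used very different face depths and transition shapes.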
Most other manufacturers did not vary the face layer so much, and a few, such as the makers of Japanese VH, kept the face precisely at the 35% thickness of the British-recipe Vickers Cemented (VC) armor they obtained in 1912 with the battle-cruiser HMS KONGO, with almost no variation - a feat of quality control exceeding that of almost anybody else who ever made KC-type naval armors, cemented or not. Spots where holes had to be drilled through the face for viewing ports or periscopes were covered with thick asbestos during hardening to keep them from being hardened as much, since drilling the hard face without cracking it was very difficult.
After the final quench, the face was inspected again and any significant irregularities were chipped off with a jackhammer (deep cracks were not allowed, of course), though some plates had rather smooth faces: US WWII Class "A" armor plates had pebbly, uneven faces and smooth backs, but Japanese VH had so smooth a face that it is hard to tell which side is the face and which is the back unless you know how the plate edges were supposed to fit the adjacent plates. Except for KC a/A (the only exception, to my knowledge), the plate was then tempered in a low-temperature furnace to toughen it and, if needed and possible, heated to allow it to be slightly bent in a huge press to fit the final shape for installation aboard ship (even KC a/A underwent that final shaping step, if needed).
This article is copyrighted 2012 by Nathan Okun and is reproduced on NavWeaps.com with permission.
- 15 April 2012