The textbook covers three mapping strategies: Direct Mapped Cache, Fully Associative Cache, and Set Associative Cache. In all three, the main memory address and the binary information it is mapped to have the same length.
My confusion goes like this:
Suppose one of these mapping strategies is used. The CPU gives the address A of the main memory location it wants to access, but before accessing it, that address is translated through the mapping policy into an address B corresponding to the cache. The cache is then checked to see whether valid data is loaded at B. If so, the data is read directly, that is, a hit; if not, it is a miss and the access goes to main memory.
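To make my mental model concrete, here is a toy sketch of the flow I just described. This is my own guess, not the textbook's scheme: I assume a direct-mapped cache with made-up sizes (2^14 words of main memory, 16 cache lines, one word per line), a mapping function that is simply "address modulo line count", and the whole address stored as the tag for simplicity.

```c
/* A toy sketch of the lookup flow as I currently picture it (NOT from the
 * textbook): direct-mapped cache, made-up sizes, one word per line. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MEM_WORDS  (1 << 14)   /* main memory: 2^14 words, so address A is 14 bits */
#define NUM_LINES  16          /* cache: 16 lines (a made-up, much smaller size)   */

static uint16_t memory[MEM_WORDS];           /* main memory */

struct line {
    bool     valid;                          /* is anything loaded in this line?        */
    uint16_t tag;                            /* which memory address the data came from
                                                (storing the whole address is my
                                                simplification, not the book's scheme)  */
    uint16_t data;                           /* the cached word                         */
};
static struct line cache[NUM_LINES];

/* My guess at the "mapping policy": address B is just A modulo the line count. */
static unsigned map_to_cache(unsigned addr_A) {
    return addr_A % NUM_LINES;
}

/* The flow I described: translate A to B, check valid, hit or miss. */
static uint16_t read_word(unsigned addr_A) {
    unsigned B = map_to_cache(addr_A);

    if (cache[B].valid && cache[B].tag == addr_A) {
        printf("hit  at cache line %u for address %u\n", B, addr_A);
        return cache[B].data;                /* hit: read directly from the cache */
    }

    printf("miss at cache line %u for address %u\n", B, addr_A);
    cache[B].valid = true;                   /* miss: fetch from main memory ...    */
    cache[B].tag   = addr_A;                 /* ... and remember where it came from */
    cache[B].data  = memory[addr_A];
    return cache[B].data;
}

int main(void) {
    memory[100] = 42;                                /* put something in main memory */
    printf("read %u\n", (unsigned)read_word(100));   /* miss: fills the cache line   */
    printf("read %u\n", (unsigned)read_word(100));   /* hit this time                */
    return 0;
}
```

With these made-up sizes, the "cache address" B that falls out of map_to_cache() is only 4 bits (16 lines), which is exactly why the textbook's claim that B has the same length as A confuses me.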
Consider a main memory that can hold 2^14 words, so A should be 14 bits, which is easy to understand. But B and A are equal in length, so B is also 14 bits. Doesn't that mean the cache is as large as main memory? In that case, what do I still need main memory for? Why not just cache everything?
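Spelled out, the arithmetic behind this worry (the 2^14 figure is just my example) is:

$$
2^{14}\ \text{words} \;\Rightarrow\; |A| = \log_2\!\bigl(2^{14}\bigr) = 14\ \text{bits}, \qquad
|B| = |A| = 14\ \text{bits} \;\Rightarrow\; 2^{14}\ \text{addressable cache locations}.
$$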
It is even more inexplicable if address B is not the address of a storage unit in the cache but the content stored in that storage unit. The CPU already gave the address of the storage unit to be accessed (even though it lives in main memory) at the very beginning, and now I am asked to take a detour through the cache just to be handed another address. If I end up accessing main memory anyway, what was the point of the detour?
What on earth am I getting wrong? Thank you.