Design Notes for Fixing VAX Unaligned Access to IO and Register Space

Problem Statement:

VAX unaligned accesses are handled by reading the surrounding longword
(or longwords) and

a) for reads, extracting the addressed word or longword
b) for writes, inserting the addressed word or longword and then
   writing the surrounding longword (or longwords) back

This is correct for all memory cases. On the 11/780, an unaligned access
to register or IO space causes an error, as it should. On CVAX, it
causes incorrect behavior, by either performing too many Qbus
references, performing read-modify-writes instead of pure writes, or
accessing the wrong Qbus locations.

The problem cannot be trivially solved with address manipulation. The
core issue is that on CVAX, unaligned access is done to exactly as many
bytes as are required, using a base longword address and a byte mask.
There are five cases, corresponding to word and longword lengths, and
byte offsets 1, 2 (longword only), and 3. Further, behavior is different
for reads and writes, because the Qbus always performs word operations
on reads, leaving it to the processor to extract a byte if needed.

Conceptual design:

Changes in vax_mmu.c:

Unaligned access is done with two separate physical addresses, pa and
pa1, because if the access crosses a page boundary, pa1 may not be
contiguous with pa. It's worth noting that in an unaligned access, the
low part of the data begins at pa (complete with byte offset), but the
high part begins at pa1 & ~03 (always in the low-order end of the
second longword).

To handle unaligned data, we will add two routines for read and write
unaligned:

    data = ReadU (pa, lnt);
    WriteU (pa, val, lnt);

Note that the length can be 1, 2, or 3 bytes. For ReadU, data is
returned right-aligned and masked. For WriteU, val is expected to be
right-aligned and masked.

The read-unaligned flows are changed as follows:

    if (mapen && ((off + lnt) > VA_PAGSIZE)) {          /* cross page? */
        vpn = VA_GETVPN (va + lnt);                     /* vpn 2nd page */
        tbi = VA_GETTBI (vpn);
        xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];      /* access tlb */
        if (((xpte.pte & acc) == 0) || (xpte.tag != vpn) ||
            ((acc & TLB_WACC) && ((xpte.pte & TLB_M) == 0)))
            xpte = fill (va + lnt, lnt, acc, NULL);     /* fill if needed */
        pa1 = ((xpte.pte & TLB_PFN) | VA_GETOFF (va + 4)) & ~03;
        }
    else pa1 = ((pa + 4) & PAMASK) & ~03;               /* not cross page */
    bo = pa & 3;
    if (lnt >= L_LONG) {                                /* lw unaligned? */
        sc = bo << 3;
        wl = ReadU (pa, L_LONG - bo);                   /* read both fragments */
        wh = ReadU (pa1, bo);                           /* extract */
        return ((wl | (wh << (32 - sc))) & LMASK);
        }
    else if (bo == 1)                                   /* read within lw */
        return ReadU (pa, L_WORD);
    else {
        wl = ReadU (pa, L_BYTE);                        /* word cross lw */
        wh = ReadU (pa1, L_BYTE);                       /* read, extract */
        return (wl | (wh << 8));
        }

These are not very different, but they do reflect that ReadU returns
right-aligned and properly masked data, rather than the encapsulating
longword.
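As a standalone check of the fragment arithmetic (illustrative only, not
simulator code; assumes little-endian layout and mirrors the simulator's
insert[] masks with local names):

    #include <stdio.h>
    #include <stdint.h>

    /* Memory holds 0x33221100 and 0x77665544; a longword read at byte
       offset 1 should return 0x44332211. The flows above only reach
       this path with bo != 0, so the (32 - sc) shift is well defined. */
    static const uint32_t mem[2] = { 0x33221100, 0x77665544 };
    static const uint32_t insert[4] = { 0x0, 0xFF, 0xFFFF, 0xFFFFFF };

    static uint32_t read_frag (uint32_t pa, int lnt)    /* models ReadU for memory */
    {
    return (mem[pa >> 2] >> ((pa & 3) << 3)) & insert[lnt];
    }

    int main (void)
    {
    uint32_t pa = 1;                                    /* byte offset 1 */
    int bo = pa & 3;
    int sc = bo << 3;
    uint32_t wl = read_frag (pa, 4 - bo);               /* low fragment: 0x332211 */
    uint32_t wh = read_frag ((pa + 4) & ~3u, bo);       /* high fragment: 0x44 */
    printf ("0x%08X\n", (unsigned) (wl | (wh << (32 - sc))));
    return 0;                                           /* prints 0x44332211 */
    }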
The write-unaligned flows change rather more drastically:

    if (mapen && ((off + lnt) > VA_PAGSIZE)) {          /* cross page? */
        vpn = VA_GETVPN (va + 4);                       /* vpn 2nd page */
        tbi = VA_GETTBI (vpn);
        xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];      /* access tlb */
        if (((xpte.pte & acc) == 0) || (xpte.tag != vpn) ||
            ((xpte.pte & TLB_M) == 0))
            xpte = fill (va + lnt, lnt, acc, NULL);     /* fill if needed */
        pa1 = ((xpte.pte & TLB_PFN) | VA_GETOFF (va + 4)) & ~03;
        }
    else pa1 = ((pa + 4) & PAMASK) & ~03;               /* not cross page */
    bo = pa & 3;
    if (lnt >= L_LONG) {                                /* lw unaligned? */
        sc = bo << 3;
        WriteU (pa, val & insert[L_LONG - bo], L_LONG - bo);
        WriteU (pa1, (val >> (32 - sc)) & insert[bo], bo);
        }
    else if (bo == 1)                                   /* write within lw */
        WriteU (pa, val & WMASK, L_WORD);
    else {                                              /* word cross lw */
        WriteU (pa, val & BMASK, L_BYTE);
        WriteU (pa1, (val >> 8) & BMASK, L_BYTE);
        }
    return;

Note that all the burden here has been thrown on the WriteU routine.

-------------

ReadU is the simpler of the two routines that need to be written. It
will handle memory reads and defer register and IO space to
model-specific unaligned handlers:

    int32 ReadU (uint32 pa, int32 lnt)
    {
    int32 dat;
    int32 sc = (pa & 3) << 3;

    if (ADDR_IS_MEM (pa))
        dat = M[pa >> 2];
    else {
        mchk_ref = REF_V;
        if (ADDR_IS_IO (pa))
            dat = ReadIOU (pa, lnt);
        else dat = ReadRegU (pa, lnt);
        }
    return ((dat >> sc) & insert[lnt]);
    }

Note that ReadIOU and ReadRegU return a "full longword," just like their
aligned counterparts, and ReadU right-aligns the result, just as ReadB,
ReadW, and ReadL do.

WriteU must handle the memory read-modify-write sequence itself.
However, it defers register and IO space to model-specific unaligned
handlers:

    void WriteU (uint32 pa, int32 val, int32 lnt)
    {
    if (ADDR_IS_MEM (pa)) {
        int32 bo = pa & 3;
        int32 sc = bo << 3;
        M[pa >> 2] = (M[pa >> 2] & ~(insert[lnt] << sc)) |
            ((val & insert[lnt]) << sc);
        }
    else {
        mchk_ref = REF_V;
        if (ADDR_IS_IO (pa))
            WriteIOU (pa, val, lnt);
        else WriteRegU (pa, val, lnt);
        }
    return;
    }

--------------

For the 11/780, ReadIOU, ReadRegU, WriteIOU, and WriteRegU all do the
same thing: they throw an SBI machine check. We can write explicit
routines to do this (and remove the unaligned checks from all the normal
adapter flows), or leave things as they are and simply define the four
routines as macros that go to the normal routines. So there's very
little to do.

On CVAX, I suspect that ReadRegU and WriteRegU behave like the normal
routines. The CVAX specs don't say much, but CMCTL (the memory
controller) notes that it ignores the byte mask and treats every access
as an aligned longword access. I suspect this is true for the other
CVAX support chips, but I no longer have chip specs.

The Qbus, on the other hand... that's a fun one. Note that all of these
cases are presented to the existing aligned IO routine:

    bo = 0, byte, word, or longword length
    bo = 2, word
    bo = 1, 2, 3, byte length

All the other cases are going to end up at ReadIOU and WriteIOU, and
they must turn the request into exactly the correct number of Qbus
accesses AND NO MORE, because Qbus reads can have side effects, and a
word read-modify-write is NOT the same as a byte write.

The read cases are:

    bo = 0, byte or word - read one word
    bo = 1, byte - read one word
    bo = 2, byte or word - read one word
    bo = 3, byte - read one word
    bo = 0, triword - read two words
    bo = 1, word or triword - read two words
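This table can be checked mechanically: a fragment needs one Qbus word
read if it fits in the word containing pa, and two otherwise, which is
exactly the (lnt + (pa & 1)) <= 2 test ReadIOU applies below. A
standalone enumeration (illustrative only; it also prints combinations
that never reach ReadIOU, per the aligned-case list above):

    #include <stdio.h>

    int main (void)
    {
    int bo, lnt;

    for (bo = 0; bo <= 3; bo++) {
        for (lnt = 1; lnt <= 3; lnt++)                  /* byte, word, triword */
            printf ("bo = %d, lnt = %d: %d word read(s)\n",
                bo, lnt, ((lnt + (bo & 1)) <= 2)? 1: 2);
        }
    return 0;
    }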
ReadIOU is very similar to the existing ReadIO:

    int32 ReadIOU (uint32 pa, int32 lnt)
    {
    int32 iod;

    iod = ReadQb (pa);                                  /* wd from Qbus */
    if ((lnt + (pa & 1)) <= 2)                          /* byte or word & even */
        iod = iod << ((pa & 2)? 16: 0);                 /* one op */
    else iod = (ReadQb (pa + 2) << 16) | iod;           /* two ops, get 2nd wd */
    SET_IRQL;
    return iod;
    }

The write cases are:

    bo = x, lnt = byte - write one byte
    bo = 0 or 2, lnt = word - write one word
    bo = 1, lnt = word - write two bytes
    bo = 0, lnt = triword - write word, byte
    bo = 1, lnt = triword - write byte, word

WriteIOU is similar to the existing WriteIO:

    void WriteIOU (uint32 pa, int32 val, int32 lnt)
    {
    switch (lnt) {

        case L_BYTE:                                    /* byte */
            WriteQb (pa, val & BMASK, WRITEB);
            break;

        case L_WORD:                                    /* word */
            if (pa & 1) {                               /* odd addr? */
                WriteQb (pa, val & BMASK, WRITEB);
                WriteQb (pa + 1, (val >> 8) & BMASK, WRITEB);
                }
            else WriteQb (pa, val, WRITE);
            break;

        case 3:                                         /* triword */
            if (pa & 1) {                               /* odd addr? */
                WriteQb (pa, val & BMASK, WRITEB);
                WriteQb (pa + 1, (val >> 8) & WMASK, WRITE);
                }
            else {
                WriteQb (pa, val & WMASK, WRITE);
                WriteQb (pa + 2, (val >> 16) & BMASK, WRITEB);
                }
            break;
            }

    SET_IRQL;
    return;
    }

-----------------

I think this handles all the cases.

/Bob Supnik

Conflicts:
	VAX/vax780_defs.h
	VAX/vax_mmu.c
	VAX/vaxmod_defs.h
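
As one worked example of the write side (illustrative only; write_qb and
write_iou are local stand-ins for WriteQb and WriteIOU, and the triword
split mirrors the flows above): a longword write at bo = 1 reaches the
Qbus as three pure writes - byte, word, byte - with no read-modify-write.

    #include <stdio.h>
    #include <stdint.h>

    static void write_qb (uint32_t pa, uint32_t val, const char *mode)
    {
    printf ("WriteQb (%03X, %04X, %s)\n", pa, val, mode);
    }

    static void write_iou (uint32_t pa, uint32_t val, int lnt)
    {
    if (lnt == 1)                                       /* byte */
        write_qb (pa, val & 0xFF, "WRITEB");
    else if ((lnt == 3) && (pa & 1)) {                  /* triword, odd addr */
        write_qb (pa, val & 0xFF, "WRITEB");
        write_qb (pa + 1, (val >> 8) & 0xFFFF, "WRITE");
        }                                               /* other cases omitted */
    }

    int main (void)
    {
    uint32_t pa = 0x201, val = 0x44332211;              /* longword, bo = 1 */
    write_iou (pa, val & 0xFFFFFF, 3);                  /* low fragment */
    write_iou ((pa + 4) & ~3u, (val >> 24) & 0xFF, 1);  /* high fragment */
    return 0;                                           /* byte, word, byte */
    }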

/* vax_mmu.c - VAX memory management

   Copyright (c) 1998-2013, Robert M Supnik

   Permission is hereby granted, free of charge, to any person obtaining a
   copy of this software and associated documentation files (the "Software"),
   to deal in the Software without restriction, including without limitation
   the rights to use, copy, modify, merge, publish, distribute, sublicense,
   and/or sell copies of the Software, and to permit persons to whom the
   Software is furnished to do so, subject to the following conditions:

   The above copyright notice and this permission notice shall be included in
   all copies or substantial portions of the Software.

   THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
   IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
   FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
   ROBERT M SUPNIK BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
   IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
   CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

   Except as contained in this notice, the name of Robert M Supnik shall not be
   used in advertising or otherwise to promote the sale, use or other dealings
   in this Software without prior written authorization from Robert M Supnik.

   29-Nov-13    RMS     Reworked unaligned flows
   24-Oct-12    MB      Added support for KA620 virtual addressing
   21-Jul-08    RMS     Removed inlining support
   28-May-08    RMS     Inlined physical memory routines
   29-Apr-07    RMS     Added address masking for system page table reads
   22-Sep-05    RMS     Fixed declarations (Sterling Garwood)
   30-Sep-04    RMS     Comment and formatting changes
   19-Sep-03    RMS     Fixed upper/lower case linkage problems on VMS
   01-Jun-03    RMS     Fixed compilation problem with USE_ADDR64

   This module contains the instruction simulators for

   Read         -       read virtual
   Write        -       write virtual
   ReadL(P)     -       read aligned physical longword (physical context)
   WriteL(P)    -       write aligned physical longword (physical context)
   ReadB(W)     -       read aligned physical byte (word)
   WriteB(W)    -       write aligned physical byte (word)
   Test         -       test access

   zap_tb       -       clear TB
   zap_tb_ent   -       clear TB entry
   chk_tb_ent   -       check TB entry
   set_map_reg  -       set up working map registers
*/
#include "vax_defs.h"
|
|
#include <setjmp.h>
|
|
|
|
typedef struct {
|
|
int32 tag; /* tag */
|
|
int32 pte; /* pte */
|
|
} TLBENT;
|
|
|
|
extern uint32 *M;
|
|
extern int32 PSL;
|
|
extern int32 mapen;
|
|
extern int32 p1, p2;
|
|
extern int32 P0BR, P0LR;
|
|
extern int32 P1BR, P1LR;
|
|
extern int32 SBR, SLR;
|
|
extern int32 SISR;
|
|
extern jmp_buf save_env;
|
|
extern UNIT cpu_unit;
|
|
|
|
int32 d_p0br, d_p0lr; /* dynamic copies */
|
|
int32 d_p1br, d_p1lr; /* altered per ucode */
|
|
int32 d_sbr, d_slr;
|
|
extern int32 mchk_va, mchk_ref; /* for mcheck */
|
|
TLBENT stlb[VA_TBSIZE], ptlb[VA_TBSIZE];
|
|
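
/* insert[lnt] is the right-justified mask for a 1, 2, or 3 byte
   unaligned fragment; index 0 is unused (lengths are always 1-3) */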
static const int32 insert[4] = {
    0x00000000, 0x000000FF, 0x0000FFFF, 0x00FFFFFF
    };
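
/* cvtacc[prot] expands the 4b PTE protection code into the TLB's
   per-mode read and write access bits (see TLB_ACCR/TLB_ACCW) */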
static const int32 cvtacc[16] = { 0, 0,
    TLB_ACCW (KERN)+TLB_ACCR (KERN),
    TLB_ACCR (KERN),
    TLB_ACCW (KERN)+TLB_ACCW (EXEC)+TLB_ACCW (SUPV)+TLB_ACCW (USER)+
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV)+TLB_ACCR (USER),
    TLB_ACCW (KERN)+TLB_ACCW (EXEC)+TLB_ACCR (KERN)+TLB_ACCR (EXEC),
    TLB_ACCW (KERN)+TLB_ACCR (KERN)+TLB_ACCR (EXEC),
    TLB_ACCR (KERN)+TLB_ACCR (EXEC),
    TLB_ACCW (KERN)+TLB_ACCW (EXEC)+TLB_ACCW (SUPV)+
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV),
    TLB_ACCW (KERN)+TLB_ACCW (EXEC)+
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV),
    TLB_ACCW (KERN)+TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV),
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV),
    TLB_ACCW (KERN)+TLB_ACCW (EXEC)+TLB_ACCW (SUPV)+
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV)+TLB_ACCR (USER),
    TLB_ACCW (KERN)+TLB_ACCW (EXEC)+
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV)+TLB_ACCR (USER),
    TLB_ACCW (KERN)+
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV)+TLB_ACCR (USER),
    TLB_ACCR (KERN)+TLB_ACCR (EXEC)+TLB_ACCR (SUPV)+TLB_ACCR (USER)
    };

t_stat tlb_ex (t_value *vptr, t_addr addr, UNIT *uptr, int32 sw);
t_stat tlb_dep (t_value val, t_addr addr, UNIT *uptr, int32 sw);
t_stat tlb_reset (DEVICE *dptr);
char *tlb_description (DEVICE *dptr);

TLBENT fill (uint32 va, int32 lnt, int32 acc, int32 *stat);
extern int32 ReadIO (uint32 pa, int32 lnt);
extern void WriteIO (uint32 pa, int32 val, int32 lnt);
extern int32 ReadReg (uint32 pa, int32 lnt);
extern void WriteReg (uint32 pa, int32 val, int32 lnt);
int32 ReadU (uint32 pa, int32 lnt);
void WriteU (uint32 pa, int32 val, int32 lnt);

/* TLB data structures

   tlb_dev      pager device descriptor
   tlb_unit     pager units
   tlb_reg      pager register list
*/

UNIT tlb_unit[] = {
    { UDATA (NULL, UNIT_FIX, VA_TBSIZE * 2) },
    { UDATA (NULL, UNIT_FIX, VA_TBSIZE * 2) }
    };

REG tlb_reg[] = {
    { NULL }
    };

DEVICE tlb_dev = {
    "TLB", tlb_unit, tlb_reg, NULL,
    2, 16, VA_N_TBI * 2, 1, 16, 32,
    &tlb_ex, &tlb_dep, &tlb_reset,
    NULL, NULL, NULL, NULL, 0, 0, NULL, NULL, NULL, NULL, NULL, NULL,
    &tlb_description
    };

/* Read and write virtual

   These routines logically fall into three phases:

   1.   Look up the virtual address in the translation buffer, calling
        the fill routine on a tag mismatch or access mismatch (invalid
        tlb entries have access = 0 and thus always mismatch).  The
        fill routine handles all errors.  If the resulting physical
        address is aligned, do an aligned physical read or write.
   2.   Test for unaligned across page boundaries.  If cross page, look
        up the physical address of the second page.  If not cross page,
        the second physical address is the same as the first.
   3.   Using the two physical addresses, do an unaligned read or
        write, with three cases: unaligned long, unaligned word within
        a longword, unaligned word crossing a longword boundary.

   Note that these routines do not handle quad or octa references.
*/
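
/* Example (illustrative): an unaligned longword read at pa = 0x201
   (bo = 1) becomes ReadU (0x201, 3) for the low three bytes plus
   ReadU (0x204, 1) for the high byte, merged as wl | (wh << 24) */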

/* Read virtual

   Inputs:
        va      =       virtual address
        lnt     =       length code (BWL)
        acc     =       access code (KESU)
   Output:
        returned data, right justified in 32b longword
*/

int32 Read (uint32 va, int32 lnt, int32 acc)
{
int32 vpn, off, tbi, pa;
int32 pa1, bo, sc, wl, wh;
TLBENT xpte;

mchk_va = va;
if (mapen) {                                            /* mapping on? */
    vpn = VA_GETVPN (va);                               /* get vpn, offset */
    off = VA_GETOFF (va);
    tbi = VA_GETTBI (vpn);
    xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];          /* access tlb */
    if (((xpte.pte & acc) == 0) || (xpte.tag != vpn) ||
        ((acc & TLB_WACC) && ((xpte.pte & TLB_M) == 0)))
        xpte = fill (va, lnt, acc, NULL);               /* fill if needed */
    pa = (xpte.pte & TLB_PFN) | off;                    /* get phys addr */
    }
else {
    pa = va & PAMASK;
    off = 0;
    }
if ((pa & (lnt - 1)) == 0) {                            /* aligned? */
    if (lnt >= L_LONG)                                  /* long, quad? */
        return ReadL (pa);
    if (lnt == L_WORD)                                  /* word? */
        return ReadW (pa);
    return ReadB (pa);                                  /* byte */
    }
if (mapen && ((uint32)(off + lnt) > VA_PAGSIZE)) {      /* cross page? */
    vpn = VA_GETVPN (va + lnt);                         /* vpn 2nd page */
    tbi = VA_GETTBI (vpn);
    xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];          /* access tlb */
    if (((xpte.pte & acc) == 0) || (xpte.tag != vpn) ||
        ((acc & TLB_WACC) && ((xpte.pte & TLB_M) == 0)))
        xpte = fill (va + lnt, lnt, acc, NULL);         /* fill if needed */
    pa1 = ((xpte.pte & TLB_PFN) | VA_GETOFF (va + 4)) & ~03;
    }
else pa1 = ((pa + 4) & PAMASK) & ~03;                   /* not cross page */
bo = pa & 3;
if (lnt >= L_LONG) {                                    /* lw unaligned? */
    sc = bo << 3;
    wl = ReadU (pa, L_LONG - bo);                       /* read both fragments */
    wh = ReadU (pa1, bo);                               /* extract */
    return ((wl | (wh << (32 - sc))) & LMASK);
    }
else if (bo == 1)                                       /* read within lw */
    return ReadU (pa, L_WORD);
else {
    wl = ReadU (pa, L_BYTE);                            /* word cross lw */
    wh = ReadU (pa1, L_BYTE);                           /* read, extract */
    return (wl | (wh << 8));
    }
}

/* Write virtual

   Inputs:
        va      =       virtual address
        val     =       data to be written, right justified in 32b lw
        lnt     =       length code (BWL)
        acc     =       access code (KESU)
   Output:
        none
*/

void Write (uint32 va, int32 val, int32 lnt, int32 acc)
{
int32 vpn, off, tbi, pa;
int32 pa1, bo, sc;
TLBENT xpte;

mchk_va = va;
if (mapen) {
    vpn = VA_GETVPN (va);
    off = VA_GETOFF (va);
    tbi = VA_GETTBI (vpn);
    xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];          /* access tlb */
    if (((xpte.pte & acc) == 0) || (xpte.tag != vpn) ||
        ((xpte.pte & TLB_M) == 0))
        xpte = fill (va, lnt, acc, NULL);
    pa = (xpte.pte & TLB_PFN) | off;
    }
else {
    pa = va & PAMASK;
    off = 0;
    }
if ((pa & (lnt - 1)) == 0) {                            /* aligned? */
    if (lnt >= L_LONG)                                  /* long, quad? */
        WriteL (pa, val);
    else if (lnt == L_WORD)                             /* word? */
        WriteW (pa, val);
    else WriteB (pa, val);                              /* byte */
    return;
    }
if (mapen && ((uint32)(off + lnt) > VA_PAGSIZE)) {      /* cross page? */
    vpn = VA_GETVPN (va + 4);
    tbi = VA_GETTBI (vpn);
    xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];          /* access tlb */
    if (((xpte.pte & acc) == 0) || (xpte.tag != vpn) ||
        ((xpte.pte & TLB_M) == 0))
        xpte = fill (va + lnt, lnt, acc, NULL);
    pa1 = ((xpte.pte & TLB_PFN) | VA_GETOFF (va + 4)) & ~03;
    }
else pa1 = ((pa + 4) & PAMASK) & ~03;
bo = pa & 3;
if (lnt >= L_LONG) {                                    /* lw unaligned? */
    sc = bo << 3;
    WriteU (pa, val & insert[L_LONG - bo], L_LONG - bo);
    WriteU (pa1, (val >> (32 - sc)) & insert[bo], bo);
    }
else if (bo == 1)                                       /* write within lw */
    WriteU (pa, val & WMASK, L_WORD);
else {                                                  /* word cross lw */
    WriteU (pa, val & BMASK, L_BYTE);
    WriteU (pa1, (val >> 8) & BMASK, L_BYTE);
    }
return;
}
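
/* Example (illustrative): an unaligned longword write at pa = 0x203
   (bo = 3) becomes WriteU (0x203, val & 0xFF, 1) for the low byte plus
   WriteU (0x204, (val >> 8) & 0xFFFFFF, 3) for the high three bytes */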

/* Test access to a byte (VAX PROBEx) */

int32 Test (uint32 va, int32 acc, int32 *status)
{
int32 vpn, off, tbi;
TLBENT xpte;

*status = PR_OK;                                        /* assume ok */
if (mapen) {                                            /* mapping on? */
    vpn = VA_GETVPN (va);                               /* get vpn, off */
    off = VA_GETOFF (va);
    tbi = VA_GETTBI (vpn);
    xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];          /* access tlb */
    if ((xpte.pte & acc) && (xpte.tag == vpn))          /* TB hit, acc ok? */
        return (xpte.pte & TLB_PFN) | off;
    xpte = fill (va, L_BYTE, acc, status);              /* fill TB */
    if (*status == PR_OK)
        return (xpte.pte & TLB_PFN) | off;
    else return -1;
    }
return va & PAMASK;                                     /* ret phys addr */
}

/* Read aligned physical (in virtual context, unless indicated)

   Inputs:
        pa      =       physical address, naturally aligned
   Output:
        returned data, right justified in 32b longword
*/

SIM_INLINE int32 ReadB (uint32 pa)
{
int32 dat;

if (ADDR_IS_MEM (pa))
    dat = M[pa >> 2];
else {
    mchk_ref = REF_V;
    if (ADDR_IS_IO (pa))
        dat = ReadIO (pa, L_BYTE);
    else dat = ReadReg (pa, L_BYTE);
    }
return ((dat >> ((pa & 3) << 3)) & BMASK);
}

SIM_INLINE int32 ReadW (uint32 pa)
{
int32 dat;

if (ADDR_IS_MEM (pa))
    dat = M[pa >> 2];
else {
    mchk_ref = REF_V;
    if (ADDR_IS_IO (pa))
        dat = ReadIO (pa, L_WORD);
    else dat = ReadReg (pa, L_WORD);
    }
return ((dat >> ((pa & 2)? 16: 0)) & WMASK);
}

SIM_INLINE int32 ReadL (uint32 pa)
{
if (ADDR_IS_MEM (pa))
    return M[pa >> 2];
mchk_ref = REF_V;
if (ADDR_IS_IO (pa))
    return ReadIO (pa, L_LONG);
return ReadReg (pa, L_LONG);
}

SIM_INLINE int32 ReadLP (uint32 pa)
{
if (ADDR_IS_MEM (pa))
    return M[pa >> 2];
mchk_va = pa;
mchk_ref = REF_P;
if (ADDR_IS_IO (pa))
    return ReadIO (pa, L_LONG);
return ReadReg (pa, L_LONG);
}

/* Read unaligned physical (in virtual context)

   Inputs:
        pa      =       physical address
        lnt     =       length in bytes (1, 2, or 3)
   Output:
        returned data, right justified and masked
*/

int32 ReadU (uint32 pa, int32 lnt)
{
int32 dat;
int32 sc = (pa & 3) << 3;

if (ADDR_IS_MEM (pa))
    dat = M[pa >> 2];
else {
    mchk_ref = REF_V;
    if (ADDR_IS_IO (pa))
        dat = ReadIOU (pa, lnt);
    else dat = ReadRegU (pa, lnt);
    }
return ((dat >> sc) & insert[lnt]);
}

/* Write aligned physical (in virtual context, unless indicated)

   Inputs:
        pa      =       physical address, naturally aligned
        val     =       data to be written, right justified in 32b longword
   Output:
        none
*/

SIM_INLINE void WriteB (uint32 pa, int32 val)
{
if (ADDR_IS_MEM (pa)) {
    int32 id = pa >> 2;
    int32 sc = (pa & 3) << 3;
    int32 mask = 0xFF << sc;
    M[id] = (M[id] & ~mask) | (val << sc);
    }
else {
    mchk_ref = REF_V;
    if (ADDR_IS_IO (pa))
        WriteIO (pa, val, L_BYTE);
    else WriteReg (pa, val, L_BYTE);
    }
return;
}

SIM_INLINE void WriteW (uint32 pa, int32 val)
{
if (ADDR_IS_MEM (pa)) {
    int32 id = pa >> 2;
    M[id] = (pa & 2)? (M[id] & 0xFFFF) | (val << 16):
        (M[id] & ~0xFFFF) | val;
    }
else {
    mchk_ref = REF_V;
    if (ADDR_IS_IO (pa))
        WriteIO (pa, val, L_WORD);
    else WriteReg (pa, val, L_WORD);
    }
return;
}

SIM_INLINE void WriteL (uint32 pa, int32 val)
{
if (ADDR_IS_MEM (pa))
    M[pa >> 2] = val;
else {
    mchk_ref = REF_V;
    if (ADDR_IS_IO (pa))
        WriteIO (pa, val, L_LONG);
    else WriteReg (pa, val, L_LONG);
    }
return;
}

void WriteLP (uint32 pa, int32 val)
{
if (ADDR_IS_MEM (pa))
    M[pa >> 2] = val;
else {
    mchk_va = pa;
    mchk_ref = REF_P;
    if (ADDR_IS_IO (pa))
        WriteIO (pa, val, L_LONG);
    else WriteReg (pa, val, L_LONG);
    }
return;
}

/* Write unaligned physical (in virtual context)

   Inputs:
        pa      =       physical address
        val     =       data to be written, right justified in 32b longword
        lnt     =       length (1, 2, or 3 bytes)
   Output:
        none
*/

void WriteU (uint32 pa, int32 val, int32 lnt)
{
if (ADDR_IS_MEM (pa)) {
    int32 bo = pa & 3;
    int32 sc = bo << 3;
    M[pa >> 2] = (M[pa >> 2] & ~(insert[lnt] << sc)) | ((val & insert[lnt]) << sc);
    }
else {
    mchk_ref = REF_V;
    if (ADDR_IS_IO (pa))
        WriteIOU (pa, val, lnt);
    else WriteRegU (pa, val, lnt);
    }
return;
}

/* TLB fill

   This routine fills the TLB after a tag or access mismatch, or
   on a write if pte<m> = 0.  It fills the TLB and returns the
   pte to the caller.  On an error, it aborts directly to the
   fault handler in the CPU.

   If called from map (VAX PROBEx), the error status is returned
   to the caller, and no fault occurs.
*/
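
/* Example (illustrative): a P0 reference whose page table entry is not
   covered by the system TLB costs two PTE reads - ReadLP for the system
   PTE that maps the process page table, then ReadL for the process PTE */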

#define MM_ERR(param) { \
    if (stat) { \
        *stat = param; \
        return zero_pte; \
        } \
    p1 = MM_PARAM (acc & TLB_WACC, param); \
    p2 = va; \
    ABORT ((param & PR_TNV)? ABORT_TNV: ABORT_ACV); }

TLBENT fill (uint32 va, int32 lnt, int32 acc, int32 *stat)
{
int32 ptidx = (((uint32) va) >> 7) & ~03;
int32 tlbpte, ptead, pte, tbi, vpn;
static TLBENT zero_pte = { 0, 0 };

if (va & VA_S0) {                                       /* system space? */
    if (ptidx >= d_slr)                                 /* system */
        MM_ERR (PR_LNV);
    ptead = (d_sbr + ptidx) & PAMASK;
    }
else {
    if (va & VA_P1) {                                   /* P1? */
        if (ptidx < d_p1lr)
            MM_ERR (PR_LNV);
        ptead = d_p1br + ptidx;
        }
    else {                                              /* P0 */
        if (ptidx >= d_p0lr)
            MM_ERR (PR_LNV);
        ptead = d_p0br + ptidx;
        }
#if !defined (VAX_620)
    if ((ptead & VA_S0) == 0)
        ABORT (STOP_PPTE);                              /* ppte must be sys */
    vpn = VA_GETVPN (ptead);                            /* get vpn, tbi */
    tbi = VA_GETTBI (vpn);
    if (stlb[tbi].tag != vpn) {                         /* in sys tlb? */
        ptidx = ((uint32) ptead) >> 7;                  /* xlate like sys */
        if (ptidx >= d_slr)
            MM_ERR (PR_PLNV);
        pte = ReadLP ((d_sbr + ptidx) & PAMASK);        /* get system pte */
#if defined (VAX_780)
        if ((pte & PTE_ACC) == 0)                       /* spte ACV? */
            MM_ERR (PR_PACV);
#endif
        if ((pte & PTE_V) == 0)                         /* spte TNV? */
            MM_ERR (PR_PTNV);
        stlb[tbi].tag = vpn;                            /* set stlb tag */
        stlb[tbi].pte = cvtacc[PTE_GETACC (pte)] |
            ((pte << VA_N_OFF) & TLB_PFN);              /* set stlb data */
        }
    ptead = (stlb[tbi].pte & TLB_PFN) | VA_GETOFF (ptead);
#endif
    }
pte = ReadL (ptead);                                    /* read pte */
tlbpte = cvtacc[PTE_GETACC (pte)] |                     /* cvt access */
    ((pte << VA_N_OFF) & TLB_PFN);                      /* set addr */
if ((tlbpte & acc) == 0)                                /* chk access */
    MM_ERR (PR_ACV);
if ((pte & PTE_V) == 0)                                 /* check valid */
    MM_ERR (PR_TNV);
if (acc & TLB_WACC) {                                   /* write? */
    if ((pte & PTE_M) == 0)
        WriteL (ptead, pte | PTE_M);
    tlbpte = tlbpte | TLB_M;                            /* set M */
    }
vpn = VA_GETVPN (va);
tbi = VA_GETTBI (vpn);
if ((va & VA_S0) == 0) {                                /* process space? */
    ptlb[tbi].tag = vpn;                                /* store tlb ent */
    ptlb[tbi].pte = tlbpte;
    return ptlb[tbi];
    }
stlb[tbi].tag = vpn;                                    /* system space */
stlb[tbi].pte = tlbpte;                                 /* store tlb ent */
return stlb[tbi];
}

/* Utility routines */
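
/* The dynamic copies bias the map registers by the fixed contribution
   of the space select bits: fill derives ptidx from VA<31:7>, so
   subtracting 0x800000 (P1) or 0x1000000 (system) from the base, and
   adding the same amount to the length limit, lets the same ptidx be
   used unchanged for all three regions */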
void set_map_reg (void)
{
d_p0br = P0BR & ~03;
d_p1br = (P1BR - 0x800000) & ~03;                       /* VA<30> >> 7 */
d_sbr = (SBR - 0x1000000) & ~03;                        /* VA<31> >> 7 */
d_p0lr = (P0LR << 2);
d_p1lr = (P1LR << 2) + 0x800000;                        /* VA<30> >> 7 */
d_slr = (SLR << 2) + 0x1000000;                         /* VA<31> >> 7 */
return;
}

/* Zap process (0) or whole (1) tb */

void zap_tb (int stb)
{
size_t i;

for (i = 0; i < VA_TBSIZE; i++) {
    ptlb[i].tag = ptlb[i].pte = -1;
    if (stb)
        stlb[i].tag = stlb[i].pte = -1;
    }
return;
}

/* Zap single tb entry corresponding to va */

void zap_tb_ent (uint32 va)
{
int32 tbi = VA_GETTBI (VA_GETVPN (va));

if (va & VA_S0)
    stlb[tbi].tag = stlb[tbi].pte = -1;
else ptlb[tbi].tag = ptlb[tbi].pte = -1;
return;
}

/* Check for tlb entry corresponding to va */

t_bool chk_tb_ent (uint32 va)
{
int32 vpn = VA_GETVPN (va);
int32 tbi = VA_GETTBI (vpn);
TLBENT xpte;

xpte = (va & VA_S0)? stlb[tbi]: ptlb[tbi];
if (xpte.tag == vpn)
    return TRUE;
return FALSE;
}

/* TLB examine */

t_stat tlb_ex (t_value *vptr, t_addr addr, UNIT *uptr, int32 sw)
{
int32 tlbn = uptr - tlb_unit;
uint32 idx = (uint32) addr >> 1;

if (idx >= VA_TBSIZE)
    return SCPE_NXM;
if (addr & 1)
    *vptr = ((uint32) (tlbn? stlb[idx].pte: ptlb[idx].pte));
else *vptr = ((uint32) (tlbn? stlb[idx].tag: ptlb[idx].tag));
return SCPE_OK;
}

/* TLB deposit */

t_stat tlb_dep (t_value val, t_addr addr, UNIT *uptr, int32 sw)
{
int32 tlbn = uptr - tlb_unit;
uint32 idx = (uint32) addr >> 1;

if (idx >= VA_TBSIZE)
    return SCPE_NXM;
if (addr & 1) {
    if (tlbn) stlb[idx].pte = (int32) val;
    else ptlb[idx].pte = (int32) val;
    }
else {
    if (tlbn) stlb[idx].tag = (int32) val;
    else ptlb[idx].tag = (int32) val;
    }
return SCPE_OK;
}

/* TLB reset */

t_stat tlb_reset (DEVICE *dptr)
{
size_t i;

for (i = 0; i < VA_TBSIZE; i++)
    stlb[i].tag = ptlb[i].tag = stlb[i].pte = ptlb[i].pte = -1;
return SCPE_OK;
}

char *tlb_description (DEVICE *dptr)
{
return "translation buffer";
}