diff --git a/.idea/material_theme_project_new.xml b/.idea/material_theme_project_new.xml
new file mode 100644
index 0000000..af99699
--- /dev/null
+++ b/.idea/material_theme_project_new.xml
@@ -0,0 +1,12 @@
+
+
+
+
+
+
\ No newline at end of file
diff --git a/demo.md b/demo.md
new file mode 100644
index 0000000..0b2d874
--- /dev/null
+++ b/demo.md
@@ -0,0 +1,458 @@
---

## **Chapter 1: Overview of the Compiler and its Structure**

### **1. Role of Linker, Loader, and Preprocessor in Compilation**
**Question**: Explain the role of the linker, loader, and preprocessor in the process of compilation. *(Asked in Winter 2023)*

#### **Answer**:

| **Component** | **Role** | **Example** |
|------------------|-----------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| **Linker** | Combines object files and libraries into an executable file. Resolves external symbols. | Combines `main.o` and `math.o` to form `main.exe`. |
| **Loader** | Loads the executable into memory for execution. Allocates memory and initializes registers. | Loads `main.exe` into RAM and sets the instruction pointer to the starting address. |
| **Preprocessor** | Processes directives like `#include`, `#define`, and macros. Produces the expanded source that is handed to the compiler. | Expands `#include <stdio.h>` into the actual header declarations for the compiler to use. |

---

### **2. Phases of Compiler**
**Question**: Explain the phases of a compiler with an example. *(Asked in Winter 2023 and Summer 2023)*

#### **Answer**:
The compiler operates in the following phases:

| **Phase** | **Role** | **Example Input** | **Example Output** |
|----------------------------|------------------------------------------------------------------------------------------------------------|-------------------------|----------------------------------------|
| **Lexical Analysis** | Converts the source code into tokens. Each meaningful element of the code is identified as a token. | `int x = 5;` | Tokens: `int`, `id(x)`, `assign`, `num(5)`, `semicolon` |
| **Syntax Analysis** | Builds a parse tree based on grammar rules that define the structure of the source code. | Tokens: `int`, `id(x)`, `assign`, `num(5)` | Parse tree of the expression. |
| **Semantic Analysis** | Checks the meaning of the code, such as type checking, declaration of variables, etc. | Parse tree | Annotated parse tree with types |
| **Intermediate Code Gen.** | Produces an intermediate code that is independent of the target machine. | Annotated parse tree | Three-address code like `t1 = 5` |
| **Code Optimization** | Makes the intermediate code more efficient by removing redundancies and improving performance. | `t1 = t2 + 5; t3 = t2 + 5;` | `t1 = t2 + 5; t3 = t1;` (optimized) |
| **Code Generation** | Converts the optimized intermediate code into machine-specific code. | Optimized intermediate code | Assembly language code or binary code |
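Before the worked example below, here is a small runnable sketch of the two ends of this pipeline, lexical analysis and intermediate code generation, applied to the same statement. It is written in Python purely for illustration; the names `TOKEN_SPEC`, `tokenize`, and `gen_tac` are invented for this sketch, not taken from any real compiler.

```python
import re

# One named group per token class; order matters (more specific
# patterns first). These regexes play the role of the lexer's spec.
TOKEN_SPEC = [
    ("num",    r"\d+"),
    ("id",     r"[A-Za-z_]\w*"),
    ("assign", r"="),
    ("plus",   r"\+"),
    ("semi",   r";"),
    ("ws",     r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    """Lexical analysis: source text -> list of (token-name, lexeme) pairs."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(src) if m.lastgroup != "ws"]

def gen_tac(tokens):
    """Naive intermediate-code generation for the single statement form
    `id = id + num;`: just enough to mirror the phase table above."""
    (_, target), _, (_, lhs), _, (_, rhs), _ = tokens
    return [f"t1 = {lhs} + {rhs}", f"{target} = t1"]

toks = tokenize("a = b + 5;")
print(toks)          # [('id', 'a'), ('assign', '='), ('id', 'b'),
                     #  ('plus', '+'), ('num', '5'), ('semi', ';')]
print(gen_tac(toks)) # ['t1 = b + 5', 'a = t1']
```

The printed token list and three-address code correspond line for line to the Lexical Analysis and Intermediate Code Gen. rows of the table that follows.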
#### **Example for Compilation of `a = b + 5;`**
| **Phase** | **Example Input** | **Example Output** |
|----------------------------|-----------------------|-----------------------------------|
| **Lexical Analysis** | `a = b + 5;` | Tokens: `id(a)`, `assign`, `id(b)`, `plus`, `num(5)`, `semicolon` |
| **Syntax Analysis** | Tokens | Parse tree for `a = b + 5` |
| **Semantic Analysis** | Parse tree | Checks that `a` and `b` are declared and type-compatible |
| **Intermediate Code Gen.** | Annotated tree | `t1 = b + 5; a = t1;` |
| **Code Optimization** | `t1 = b + 5; a = t1;` | `a = b + 5;` (copy propagation removes the temporary `t1`) |
| **Code Generation** | Optimized code | Assembly: `LOAD R1, b; ADD R1, 5; STORE a` |

---

### **3. Difference Between Compiler and Interpreter**
**Question**: Differentiate between compiler and interpreter. *(Asked in Winter 2022)*

#### **Answer**:
| **Aspect** | **Compiler** | **Interpreter** |
|-------------------|--------------------------------------------------------------------|-----------------------------------------------------------|
| **Translation** | Converts entire code into machine language before execution. | Converts code line by line during execution. |
| **Execution Speed** | Faster, as code is pre-compiled. | Slower, as translation happens at runtime. |
| **Error Detection**| Reports errors after complete analysis of the code. | Stops execution immediately when an error is encountered. |

---

## **Chapter 2: Lexical Analysis**

### **1. Role of Lexical Analyzer**
**Question**: Explain the role of lexical analyzer. *(Asked in Summer 2023)*

#### **Answer**:
| **Aspect** | **Details** |
|--------------------|-------------------------------------------------------------------------------------------------|
| **Definition** | The lexical analyzer is the first phase of a compiler. It reads the source code and breaks it into tokens. |
| **Input** | Source code (e.g., `int a = b + 5;`). |
| **Output** | Sequence of tokens (e.g., `int`, `id(a)`, `assign`, `id(b)`, `plus`, `num(5)`). |
| **Responsibilities**| Tokenization, error detection for invalid characters, and skipping whitespace and comments. |

---

### **2. Specification of Tokens**
**Question**: Define lexeme, token, and pattern. Identify tokens from the following program segment:

```c
void change (int c, int d) { int m; m = c; c = d; d = m; }
```

#### **Answer**:
- **Lexeme**: The actual sequence of characters in the source code (e.g., `int`, `c`, `=`, etc.).
- **Token**: A class of lexemes. A token is a pair consisting of the token name and an attribute value (e.g., `<keyword, int>`, `<id, pointer to symbol-table entry>`).
- **Pattern**: A rule or regular expression that describes the set of lexemes for a particular token.

**Tokens in the Program**:
| **Lexeme** | **Token** |
|-------------|--------------------|
| `void` | `<keyword>` |
| `change` | `<identifier>` |
| `(` | `<open parenthesis>` |
| `int` | `<keyword>` |
| `c` | `<identifier>` |
| `,` | `<comma>` |
| `d` | `<identifier>` |
| `)` | `<close parenthesis>` |
| `{` | `<open brace>` |
| `m` | `<identifier>` |
| `=` | `<assignment operator>` |
| `;` | `<semicolon>` |
| `}` | `<close brace>` |
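The pairing of a token name with an attribute can be made concrete with a short sketch. The following Python fragment is illustrative only (`KEYWORDS`, `symtab`, and `intern` are invented names); it classifies each lexeme of the `change` function and shows how an `id` token conventionally carries a symbol-table index as its attribute.

```python
import re

KEYWORDS = {"void", "int"}   # reserved words of this tiny fragment
symtab = []                  # symbol table: a growing list of names

def intern(name):
    """Return the symbol-table index of an identifier, adding it if new."""
    if name not in symtab:
        symtab.append(name)
    return symtab.index(name)

def tokens(src):
    """Yield <token-name, attribute> pairs for the program segment."""
    for lexeme in re.findall(r"[A-Za-z_]\w*|[(){},;=]", src):
        if lexeme in KEYWORDS:
            yield ("keyword", lexeme)
        elif lexeme[0].isalpha() or lexeme[0] == "_":
            yield ("id", intern(lexeme))   # attribute = symbol-table index
        else:
            yield ("punct", lexeme)

src = "void change (int c, int d) { int m; m = c; c = d; d = m; }"
print(list(tokens(src)))   # ('keyword', 'void'), ('id', 0), ('punct', '('), ...
print(symtab)              # ['change', 'c', 'd', 'm']
```

Note that `void` and `change` match the same identifier pattern; the keyword set is what decides which token class each lexeme finally belongs to.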
---

### **3. Input Buffering**
**Question**: What is input buffering? Explain its importance. *(Asked in Winter 2022)*

#### **Answer**:
- **Definition**: Input buffering is used to optimize the reading of source code. It reads characters in large blocks instead of one character at a time, reducing the overhead caused by frequent I/O operations.
- **Importance**:
  - It increases efficiency by reducing the number of read operations.
  - It allows the lexical analyzer to look ahead into the input stream efficiently.

**Techniques**:
1. **Two-Pointer (Buffer-Pair) Buffering**: Uses two pointers, `lexemeBegin` and `forward`, to mark the start of the current lexeme and the lookahead position in the buffer.
2. **Circular Buffering**: Treats the buffer pair as circular, so when one half is exhausted the other is refilled and reading wraps around, reusing the space.

---

## **Chapter 3: Syntax Analysis**

### **1. Top-Down vs Bottom-Up Parsing**
**Question**: Differentiate between top-down parsing and bottom-up parsing. *(Asked in Winter 2023)*

#### **Answer**:
| **Aspect** | **Top-Down Parsing** | **Bottom-Up Parsing** |
|------------------------|------------------------------------------------------------|-----------------------------------------------------------|
| **Definition** | Builds the parse tree from the root to the leaves. | Builds the parse tree from the leaves to the root. |
| **Example** | Recursive Descent Parsing. | LR Parsing (SLR, CLR, LALR). |
| **Grammar Type** | Works best with LL(1) grammars. | Works best with LR(1) grammars. |
| **Left Recursion** | Cannot handle left recursion directly; the grammar must be rewritten. | Handles left-recursive grammars naturally. |
| **Example Grammar** | `S → aAb \| b` | `S → Aa \| bAc \| bBa` |

---

### **2. Left Recursion Elimination**
**Question**: What is left recursion? Eliminate left recursion for `A → Aα | β`. *(Asked in Winter 2022)*

#### **Answer**:
- **Definition**: Left recursion occurs when the non-terminal on the left-hand side of a production also appears as the first symbol on its right-hand side. It can cause infinite recursion in recursive descent parsers.
- **Elimination**:
  1. Identify the left-recursive rule: `A → Aα | β`.
  2. Replace it with:
     - `A → βA'`
     - `A' → αA' | ε`

**Example**:
Original:
```
A → Aα | β
```
Rewritten:
```
A → βA'
A' → αA' | ε
```

---

### **3. Syntax Tree Construction**
**Question**: Draw the syntax tree for the expression `(a + b) * (c + d)`. *(Asked in Summer 2022)*

#### **Answer**:
The syntax tree for the expression `(a + b) * (c + d)` is as follows:

```
        *
       / \
      +   +
     / \ / \
    a  b c  d
```

This tree shows that the multiplication (`*`) is the root, and its two operands are the sums `(a + b)` and `(c + d)`. A recursive-descent sketch that builds exactly this tree follows.
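The sketch below ties sections 1 and 2 together: it is a minimal recursive-descent parser (Python, invented names, illustrative only) for the left-recursion-free expression grammar `E → T E'`, `E' → + T E' | ε`, `T → F T'`, `T' → * F T' | ε`, `F → ( E ) | id`, and it builds the tree for `(a + b) * (c + d)` as nested tuples.

```python
import re

def parse(src):
    """Recursive-descent parser for the left-recursion-free expression
    grammar; each `while` loop plays the role of a primed non-terminal."""
    toks = re.findall(r"[A-Za-z_]\w*|[()+*]", src)
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat(expected=None):
        nonlocal pos
        tok = toks[pos]
        if expected and tok != expected:
            raise SyntaxError(f"expected {expected}, got {tok}")
        pos += 1
        return tok

    def expr():                      # E  -> T E'
        node = term()
        while peek() == "+":         # E' -> + T E' | epsilon, as a loop
            eat("+")
            node = ("+", node, term())
        return node

    def term():                      # T  -> F T'
        node = factor()
        while peek() == "*":         # T' -> * F T' | epsilon
            eat("*")
            node = ("*", node, factor())
        return node

    def factor():                    # F  -> ( E ) | id
        if peek() == "(":
            eat("(")
            node = expr()
            eat(")")
            return node
        return eat()

    return expr()

print(parse("(a + b) * (c + d)"))   # ('*', ('+', 'a', 'b'), ('+', 'c', 'd'))
```

Because `*` is handled one level deeper than `+`, the multiplication ends up at the root, exactly as in the drawn tree.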
---

## **Chapter 4: Error Recovery**

### **1. Types of Errors in Compiler Design**
**Question**: Describe the different types of errors in compiler design. *(Asked in Summer 2023)*

#### **Answer**:

| **Error Type** | **Description** | **Example** |
|----------------------------|---------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| **Lexical Errors** | Errors during tokenization (e.g., unrecognized characters). | Writing `int@ x = 5;` instead of `int x = 5;`. |
| **Syntax Errors** | Errors in grammar or structure of code. | Missing semicolon: `int x = 5`. |
| **Semantic Errors** | Errors in meaning or logic of the code. | Adding an integer and a string: `int x = "5" + 5;`. |
| **Runtime Errors** | Errors that occur while the program executes. | Division by zero: `x = 5 / 0;`. |
| **Logical Errors** | Errors where code runs but produces unintended output. | Using the wrong formula in a calculation. |

---

### **2. Error Recovery Techniques**

**Question**: Explain the error recovery techniques used in compilers. *(Asked in Winter 2023)*

#### **Answer**:
| **Technique** | **Description** | **Example** |
|-----------------------------|---------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| **Panic Mode Recovery** | Skips input symbols until a synchronizing token is found. | Skipping to the next semicolon when a syntax error occurs. |
| **Phrase Level Recovery** | Performs local corrections to the input. | Replacing a missing semicolon or parentheses. |
| **Error Productions** | Adds error rules to the grammar to handle specific cases. | Specifying rules for unbalanced parentheses. |
| **Global Correction** | Tries to find the closest legal program to the incorrect input using algorithms. | Automatically fixing mismatched braces in code. |

- **Asked in Papers**: Winter 2022 (Q4b), Winter 2023 (Q4b).
- **Probability of Reappearance in Winter 2024**: **0.7**.

---

### **3. Syntax Error Handling**
**Question**: What are the common methods for handling syntax errors during compilation? *(Asked in Summer 2022)*

#### **Answer**:
- **Error Productions**: A way of handling expected errors by including special rules in the grammar that help the parser recover.
- **Panic Mode Recovery**: Skips tokens until a valid synchronization point (e.g., a semicolon or closing parenthesis) is found, as in the sketch below.
- **Phrase-Level Recovery**: Corrects a local error by inserting or deleting a symbol based on the expected rules.
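Here is a minimal sketch of panic-mode recovery, assuming a toy statement form `id = id ;` and `;`/`}` as the synchronizing tokens (all names are invented for illustration; a real parser would do this inside its parsing routines).

```python
SYNC = {";", "}"}   # a typical choice of synchronizing tokens

def parse_statements(tokens):
    """Panic-mode sketch: parse `id = id ;` statements; on an error,
    record it, skip to the next synchronizing token, and carry on so
    that later errors in the input are still reported."""
    errors, i = [], 0
    while i < len(tokens):
        try:
            # An extremely simplified "statement" rule: id = id ;
            if not tokens[i].isidentifier():
                raise SyntaxError(f"expected identifier, got {tokens[i]!r}")
            if tokens[i + 1] != "=":
                raise SyntaxError(f"expected '=', got {tokens[i + 1]!r}")
            if not tokens[i + 2].isidentifier():
                raise SyntaxError(f"expected identifier, got {tokens[i + 2]!r}")
            if tokens[i + 3] != ";":
                raise SyntaxError(f"expected ';', got {tokens[i + 3]!r}")
            i += 4
        except (SyntaxError, IndexError) as err:
            errors.append(str(err))
            while i < len(tokens) and tokens[i] not in SYNC:
                i += 1               # panic: discard until a sync token
            i += 1                   # consume the sync token itself
    return errors

# `b = ;` is malformed; recovery skips to the `;` and still parses `c = d ;`.
print(parse_statements(["a", "=", "b", ";", "b", "=", ";", "c", "=", "d", ";"]))
```

The point of the technique is visible in the output: one error is reported, but parsing resumes at the semicolon, so the statement after the bad one is still checked.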
---

## **Chapter 5: Intermediate Code Generation**

### **1. Intermediate Code Representations**
**Question**: Explain the different types of intermediate code representations. *(Asked in Summer 2023)*

#### **Answer**:

| **Representation** | **Description** | **Example** |
|-----------------------------|---------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| **Three-Address Code (TAC)** | Uses at most three addresses per instruction. | `t1 = a + b; t2 = t1 * c;` |
| **Quadruples** | Represents instructions as tuples of four elements: `(operator, arg1, arg2, result)`. | `(+, a, b, t1)` and `(*, t1, c, t2)`. |
| **Triples** | Uses only three fields `(operator, arg1, arg2)`; a result is referenced by its triple's number instead of an explicit temporary. | `(0) +, a, b` and `(1) *, (0), c`. |
| **Indirect Triples** | Keeps a separate list of pointers to triples, so instructions can be reordered without renumbering the triples themselves. | Statement list `(0), (1)` pointing to triples `+, a, b` and `*, (0), c`. |

---

### **2. Translating Expressions**

**Question**: Translate the expression `-(a + b) * (c + d)` into TAC, Quadruples, Triples, and Indirect Triples. *(Asked in Winter 2023)*

#### **Answer**:

1. **Three-Address Code (TAC)**:
   ```
   t1 = a + b
   t2 = c + d
   t3 = -t1
   t4 = t3 * t2
   ```

2. **Quadruples** (unary minus is written `uminus`; its unused argument field is left empty):
   ```
   (+, a, b, t1)
   (+, c, d, t2)
   (uminus, t1, , t3)
   (*, t3, t2, t4)
   ```

3. **Triples**:
   ```
   (0) +, a, b
   (1) +, c, d
   (2) uminus, (0)
   (3) *, (2), (1)
   ```

4. **Indirect Triples** (a statement list points at the triples above, so the statements can be reordered without renumbering):
   ```
   Statement list: s0 → (0), s1 → (1), s2 → (2), s3 → (3)
   Triples:
   (0) +, a, b
   (1) +, c, d
   (2) uminus, (0)
   (3) *, (2), (1)
   ```

---

### **3. Control Flow in Intermediate Code**

**Question**: What is the role of control flow in intermediate code generation? Provide an example. *(Asked in Summer 2022)*

#### **Answer**:
Control flow in intermediate code generation is used to represent the flow of execution through different paths in the program, such as conditionals and loops. It helps with code optimization and eventual machine code generation.

- **Example**:
  For the statement `if (a < b) { c = a; } else { c = b; }`, the intermediate code might be:

```
if a < b goto L1
t1 = b
goto L2
L1: t1 = a
L2: c = t1
```
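The expression translation in section 2 can be mechanized. Below is a small Python sketch (the helper `gen_quads` is an invented name; real front ends work from a typed AST) that walks the nested-tuple syntax tree of `-(a + b) * (c + d)` and emits quadruples. Temporary numbering may differ slightly from a hand-written version depending on traversal order.

```python
from itertools import count

def gen_quads(ast):
    """Walk a nested-tuple syntax tree and emit quadruples
    (op, arg1, arg2, result); the unused field of `uminus` stays empty."""
    quads, temps = [], count(1)

    def walk(node):
        if isinstance(node, str):        # leaf: a variable name
            return node
        if node[0] == "uminus":          # unary operator: one argument
            arg = walk(node[1])
            t = f"t{next(temps)}"
            quads.append(("uminus", arg, "", t))
            return t
        op, left, right = node           # binary operator
        a, b = walk(left), walk(right)
        t = f"t{next(temps)}"
        quads.append((op, a, b, t))
        return t

    walk(ast)
    return quads

ast = ("*", ("uminus", ("+", "a", "b")), ("+", "c", "d"))   # -(a+b)*(c+d)
for quad in gen_quads(ast):
    print(quad)
# ('+', 'a', 'b', 't1')
# ('uminus', 't1', '', 't2')
# ('+', 'c', 'd', 't3')
# ('*', 't2', 't3', 't4')
```

Each recursive call returns the name holding its subexpression's value, which is exactly the role the temporaries `t1`–`t4` play in the tables above.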
---

## **Chapter 6: Run-Time Environments**

### **1. Stack vs Heap Allocation**

**Question**: Compare stack and heap allocation in the context of memory management during compilation. *(Asked in Winter 2023)*

#### **Answer**:

| **Aspect** | **Stack Allocation** | **Heap Allocation** |
|-------------------|--------------------------------------------------------------------|--------------------------------------------------------------------|
| **Purpose** | Used for data whose lifetime follows function calls (local variables, activation records). | Used for dynamic memory allocation (objects, dynamic arrays). |
| **Access Speed** | Faster (simple LIFO pointer arithmetic). | Slower (allocation bookkeeping, possible fragmentation or garbage collection). |
| **Management** | Automatically managed by the runtime environment. | Managed by the programmer (e.g., `malloc`/`free`) or by a garbage collector. |
| **Lifetime** | Exists until the function returns. | Exists until explicitly freed (or collected). |

---

### **2. Memory Organization and Storage Allocation**

**Question**: Explain the memory organization and storage allocation strategies used in a compiler. *(Asked in Summer 2022)*

#### **Answer**:
- **Memory Organization**:
  - The memory is organized into different regions based on the program's requirements (e.g., stack, heap, data section, code section).
  - The **stack** stores local variables and function call information, while the **heap** stores dynamically allocated memory.

- **Storage Allocation Strategies**:
  1. **Static Allocation**: Allocates memory at compile time. Suitable for fixed-size, whole-program-lifetime data.
  2. **Dynamic Allocation**: Allocates memory at runtime. Uses functions like `malloc()` or `new`.
  3. **Automatic Allocation**: Managed by the compiler, usually for local variables that are automatically created and destroyed with each function call.

---

### **3. Access to Nonlocal Data on the Stack**

**Question**: Explain how access to nonlocal data is handled on the stack during execution. *(Asked in Winter 2022)*

#### **Answer**:
- **Access to Nonlocal Data**:
  - Nonlocal data refers to variables that are not declared in the current function but exist in an enclosing scope (e.g., global variables or variables of enclosing functions).
  - These are accessed using **static links** or **dynamic links**:
    1. **Static Link**: Points to the activation record of the lexically enclosing function, allowing access to nonlocal variables under static scoping.
    2. **Dynamic Link**: Points to the caller's stack frame; it is used to restore the caller's activation record on return and, in dynamically scoped languages, to look up nonlocal names.

---

## **Chapter 7: Code Generation and Optimization**

### **1. Issues in the Design of Code Generators**

**Question**: Discuss the issues in the design of a code generator. *(Asked in Summer 2023)*

#### **Answer**:

| **Issue** | **Description** | **Example** |
|----------------------------|---------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| **Target Architecture** | The code generator must produce machine code for a specific architecture (e.g., x86, ARM). | Producing assembly code for ARM vs. x86 requires different instructions. |
| **Optimization** | Code generators should aim to produce efficient code in terms of execution time and memory usage. | Replacing `mul` with `shift` for multiplying by powers of 2. |
| **Instruction Selection** | Choosing the appropriate machine instructions to perform operations. | Emitting `INC R1` instead of `ADD R1, 1` when incrementing by one. |
| **Register Allocation** | Deciding how to use available registers efficiently. | Storing temporary results in registers to avoid memory access delays. |
| **Control Flow Management** | Managing jumps and branches, especially in the presence of loops and conditionals. | Properly handling conditional jumps like `if-else` statements. |

---

### **2. Code Optimization Techniques**

**Question**: Explain different code optimization techniques used in a compiler. *(Asked in Winter 2023)*

#### **Answer**:

| **Optimization Technique** | **Description** | **Example** |
|------------------------------------|-----------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| **Constant Folding** | Evaluates constant expressions at compile time to simplify the code. | `x = 2 + 3;` → `x = 5;`. |
| **Dead Code Elimination** | Removes code that does not affect the final outcome of the program. | Removing `if (false) { ... }` or unreachable code. |
| **Common Subexpression Elimination** | Identifies and reuses expressions that are computed more than once in the program. | `t1 = a + b; t2 = a + b;` → `t2 = t1;`. |
| **Loop Optimization** | Improves loop performance, e.g., by moving loop-invariant computations out of the loop or unrolling iterations. | Hoisting `t = a + b;` out of a loop in which `a` and `b` do not change. |
| **Strength Reduction** | Replaces expensive operations with less costly alternatives. | `x = y * 2;` → `x = y << 1;`. |

---

### **3. Register Allocation and Instruction Scheduling**

**Question**: What is register allocation and instruction scheduling in code generation? *(Asked in Summer 2023)*

#### **Answer**:
- **Register Allocation**: The process of assigning variables to a limited number of processor registers to reduce the number of memory accesses. Efficient allocation can significantly improve execution speed.

  **Example**:
  Given the expression `a = b + c;`, the code generator may allocate registers to store `b` and `c` and use them for the computation.

- **Instruction Scheduling**: This involves reordering instructions to improve performance without changing the result. It can help minimize pipeline stalls in pipelined processors by rearranging independent instructions.

  **Example**:
  `t1 = a + b; t2 = t1 * c;` must stay in order because `t2` depends on `t1`, but an independent instruction such as `t3 = d + e;` can be scheduled between them to keep the pipeline busy while `t1` is being computed.
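To make one of the optimizations from section 2 concrete, here is a small sketch of constant folding over a nested-tuple AST (illustrative only; production compilers fold on their intermediate representation and must respect overflow and floating-point rules).

```python
def fold(node):
    """Constant folding: evaluate at compile time any operator node
    whose operands are both integer constants."""
    if not isinstance(node, tuple):
        return node                      # leaf: a constant or variable name
    op, left, right = node
    left, right = fold(left), fold(right)
    if isinstance(left, int) and isinstance(right, int):
        return {"+": left + right,       # both operands known: compute now
                "-": left - right,
                "*": left * right}[op]
    return (op, left, right)             # otherwise keep the operation

# x = 2 + 3        ->  x = 5
print(fold(("+", 2, 3)))                 # 5
# x = (2 + 3) * y  ->  x = 5 * y         (partial folding)
print(fold(("*", ("+", 2, 3), "y")))     # ('*', 5, 'y')
```

The second call shows why the pass recurses bottom-up: the inner `2 + 3` is folded even though the whole expression cannot be, which is exactly the behavior the table describes.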
---

## **Chapter 8: Instruction-Level Parallelism**

### **1. Code Scheduling Constraints**

**Question**: Discuss the constraints in instruction-level parallelism and code scheduling. *(Asked in Winter 2023)*

#### **Answer**:

| **Constraint** | **Description** | **Example** |
|-------------------------|-----------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| **Data Dependence** | Instructions that depend on each other's results must execute in the correct order. | If `t1 = a + b;` and `t2 = t1 * c;`, then `t2` must wait for `t1`. |
| **Control Dependence** | Instructions guarded by a branch may execute only when the branch goes the right way. | An assignment inside `if (condition)` cannot be hoisted above the test unconditionally. |
| **Resource Conflicts** | Multiple instructions cannot use the same hardware resource (register, ALU, functional unit) at the same time. | Two instructions trying to use the same functional unit in the same cycle. |
| **Pipeline Stalls** | Instructions must be arranged to avoid pipeline hazards, such as data or control hazards. | Rearranging instructions to avoid waiting for the result of a prior instruction. |

---

### **2. Basic Block and Flow Graph**

**Question**: Explain the concept of basic blocks and flow graphs in the context of compiler design. *(Asked in Summer 2023)*

#### **Answer**:
- **Basic Block**: A basic block is a sequence of instructions with no branches in except at the entry point and no branches out except at the exit point. It represents a straight-line code sequence with no jumps or branches in the middle (a leader-based partitioning sketch appears at the end of this chapter).

  **Example**:
  ```
  x = 5;
  y = 6;
  z = x + y;
  ```

- **Flow Graph**: A flow graph is a graphical representation of the control flow in a program, where nodes represent basic blocks and edges represent the control flow between them.

  **Example**:
  For the following code:
  ```
  if (x > y) {
      z = x + y;
  } else {
      z = x - y;
  }
  ```
  The corresponding flow graph will show two possible paths, one for the `if` branch and one for the `else`.

---

### **3. Code Scheduling and Optimization**

**Question**: What is the importance of code scheduling and optimization in modern compilers? *(Asked in Winter 2022)*

#### **Answer**:
- **Code Scheduling**: Code scheduling improves the instruction execution order to maximize CPU usage by ensuring instructions that can run in parallel are scheduled accordingly. It also reduces pipeline stalls and improves throughput.

- **Code Optimization**: Optimization techniques aim to reduce resource usage, improve execution speed, and minimize code size. Common optimizations include constant folding, loop unrolling, and eliminating redundant calculations.

  **Example**:
  For a loop that repeatedly computes `t1 = a + b;`, optimization could compute it once before the loop starts if the values of `a` and `b` don't change inside the loop.
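Finally, the basic-block definition from section 2 can be turned into the classic leader algorithm. The sketch below (Python, invented names; it assumes the simple string form of three-address code used in Chapter 5) partitions the `if`/`else` code from Chapter 5 into basic blocks.

```python
def basic_blocks(tac):
    """Partition three-address instructions into basic blocks using the
    leader rules: (1) the first instruction is a leader; (2) any branch
    target is a leader; (3) any instruction right after a branch is a
    leader. `tac` is a list of strings; labels look like 'L1: ...'."""
    leaders = {0}                                      # rule 1
    labels = {line.split(":")[0]: i
              for i, line in enumerate(tac) if ":" in line}
    for i, line in enumerate(tac):
        if "goto" in line:
            leaders.add(labels[line.split()[-1]])      # rule 2: the target
            if i + 1 < len(tac):
                leaders.add(i + 1)                     # rule 3: fall-through
    cuts = sorted(leaders) + [len(tac)]
    return [tac[a:b] for a, b in zip(cuts, cuts[1:])]

tac = ["if a < b goto L1", "t1 = b", "goto L2", "L1: t1 = a", "L2: c = t1"]
for block in basic_blocks(tac):
    print(block)
# ['if a < b goto L1']
# ['t1 = b', 'goto L2']
# ['L1: t1 = a']
# ['L2: c = t1']
```

Each printed list is one node of the flow graph; the edges follow the `goto` targets and the fall-through from the conditional branch.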
+ +--- diff --git a/package.json b/package.json index ba099a2..7c9e282 100644 --- a/package.json +++ b/package.json @@ -37,7 +37,9 @@ "stream-browserify": "^3.0.0", "stream-http": "^3.2.0", "url": "^0.11.3", - "web-vitals": "^2.1.4" + "web-vitals": "^2.1.4", + "core-js": "^3.33.5", + "regenerator-runtime": "^0.14.0" }, "scripts": { "start": "react-scripts start", @@ -58,9 +60,10 @@ "not op_mini all" ], "development": [ - "last 1 chrome version", - "last 1 firefox version", - "last 1 safari version" + "last 5 chrome versions", + "last 5 firefox versions", + "last 5 safari versions", + "last 5 edge versions" ] }, "devDependencies": { @@ -68,6 +71,8 @@ "@reduxjs/toolkit": "^2.2.3", "react-router-dom": "^6.26.2", "tailwind-scrollbar-hide": "^1.1.7", - "tailwindcss": "^3.4.13" + "tailwindcss": "^3.4.13", + "postcss": "^8.4.24", + "autoprefixer": "^10.4.13" } } diff --git a/src/components/Login.jsx b/src/components/Login.jsx index 5a8d971..1ba3696 100644 --- a/src/components/Login.jsx +++ b/src/components/Login.jsx @@ -46,7 +46,6 @@ const Login = () => { }); }).catch((error) => { setErrorMessage(error.message); - console.log(error); }); })