AI Study Plan Generator
Run ID: 69bc9d0805a86f9557abae0c
Date: 2026-03-29
Category: Education
PantheraHive BOS

Create a personalized study plan with flashcards and quizzes

Personalized 4-Week Beginner Machine Learning Study Plan

This comprehensive study plan is designed to guide you through the fundamentals of Machine Learning over four weeks, tailored for a beginner level. It includes core concepts, recommended resources, practical exercises, and specific topics for flashcards and quizzes to reinforce your learning.


Overall Study Goal

To build a foundational understanding of Machine Learning principles, including supervised and unsupervised learning, model evaluation, and practical implementation using Python libraries. By the end of this plan, you will be able to understand basic ML algorithms, apply them to simple datasets, and interpret their results.

Prerequisites & Setup

Before you begin, ensure you have:

  • Basic Math Skills: Familiarity with algebra, basic calculus (derivatives are helpful but not strictly necessary for basic understanding), and statistics (mean, median, standard deviation).
  • Computer Access: A reliable computer with internet access.
  • Python Environment:

* Install Anaconda Distribution: This includes Python, Jupyter Notebook, and essential libraries like NumPy, Pandas, Matplotlib, and Scikit-learn.

* Alternatively, use Google Colab: A free cloud-based Jupyter Notebook environment that requires no setup.


Weekly Breakdown

Week 1: Introduction to ML & Python Fundamentals for Data Science

Learning Objectives:

  • Understand what Machine Learning is, its types, and common applications.
  • Review/learn Python basics relevant for data science.
  • Get familiar with NumPy and Pandas for data manipulation.
  • Visualize data using Matplotlib/Seaborn.

Core Concepts:

  • What is Machine Learning? (Supervised, Unsupervised, Reinforcement Learning)
  • Key ML Terminology (Features, Labels, Training Data, Test Data, Model)
  • Python Basics: Variables, Data Types, Control Flow (if/else, loops), Functions
  • NumPy: Arrays, Array Operations, Slicing
  • Pandas: DataFrames, Series, Data Loading (CSV), Basic Data Cleaning (handling missing values), Filtering, Grouping
  • Data Visualization: Histograms, Scatter Plots, Line Plots (using Matplotlib/Seaborn)

Recommended Resources:

  • Online Courses: "Python for Everybody" (Coursera), "Introduction to Python for Data Science" (DataCamp/edX).
  • Books/Tutorials: "Python Crash Course" (for Python basics), Official NumPy/Pandas documentation, Towards Data Science articles on Medium.
  • Videos: YouTube tutorials on Python, NumPy, Pandas, Matplotlib.

Practical Exercises:

  1. Python Practice: Write functions for basic math operations (e.g., factorial, prime checker).
  2. NumPy Exercises: Create a 3x3 identity matrix, perform element-wise multiplication on two arrays.
  3. Pandas Data Loading & Exploration:

* Find a simple CSV dataset (e.g., Iris dataset, a small sales dataset) from Kaggle or UCI Machine Learning Repository.

* Load it into a Pandas DataFrame.

* Use .head(), .info(), .describe(), .isnull().sum().

* Filter data based on a condition (e.g., all rows where 'age' > 30).

  4. Data Visualization: Create a histogram for a numerical column and a scatter plot between two numerical columns from your loaded dataset.
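The exercises above can be sketched in one short script. This is a minimal sketch using the Iris dataset bundled with scikit-learn so nothing needs downloading; any small CSV loaded with `pd.read_csv` works the same way (the column names below are those of the Iris data).

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

# NumPy warm-up: identity matrix and element-wise multiplication
identity = np.eye(3)
print(np.array([1, 2, 3]) * np.array([4, 5, 6]))  # [ 4 10 18]

# Load the Iris data as a DataFrame (pd.read_csv("iris.csv") works the same way)
df = load_iris(as_frame=True).frame

# Basic exploration
print(df.head())          # first 5 rows
df.info()                 # column dtypes and non-null counts
print(df.describe())      # summary statistics
print(df.isnull().sum())  # missing values per column

# Filter rows on a condition
long_sepals = df[df["sepal length (cm)"] > 5.0]
print(len(long_sepals), "rows with sepal length > 5 cm")

# A histogram of one column and a scatter plot of two
df["sepal length (cm)"].hist(bins=20)
df.plot.scatter(x="sepal length (cm)", y="petal length (cm)")
plt.show()
```

In a Jupyter notebook or Google Colab the figures render inline; in a plain script, `plt.show()` opens them in a window.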

Flashcard Topics:

  • Define Supervised Learning.
  • Define Unsupervised Learning.
  • What is a DataFrame?
  • Difference between NumPy array and Python list.
  • Purpose of df.describe().
  • Common types of data visualizations.

Quiz Topics:

  • Multiple choice on ML types and applications.
  • Identify correct Pandas/NumPy syntax for basic operations.
  • Interpret a simple scatter plot.
  • Questions on data types and basic Python control flow.

Week 2: Supervised Learning - Regression

Learning Objectives:

  • Understand the concept of regression and its applications.
  • Learn about Simple Linear Regression and Multiple Linear Regression.
  • Implement Linear Regression using Scikit-learn.
  • Evaluate regression models using common metrics.

Core Concepts:

  • Regression: Predicting continuous values.
  • Simple Linear Regression: Equation (y = mx + b), slope, intercept.
  • Cost Function (Mean Squared Error - MSE): How to measure model error.
  • Gradient Descent (Conceptual): How models learn.
  • Multiple Linear Regression: Extending to multiple features.
  • Model Training & Testing: Splitting data.
  • Evaluation Metrics: MSE, Root Mean Squared Error (RMSE), R-squared.
  • Scikit-learn Basics: train_test_split, LinearRegression, fit(), predict().

Recommended Resources:

  • Online Courses: "Machine Learning" by Andrew Ng (Coursera - focus on linear regression parts), "Introduction to Machine Learning with Python" (O'Reilly/various platforms).
  • Books/Tutorials: "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" (Chapter 2-4), Scikit-learn official documentation (Linear Models).
  • Videos: StatQuest with Josh Starmer (Linear Regression, R-squared, MSE).

Practical Exercises:

  1. Implement Simple Linear Regression:

* Use a simple dataset (e.g., the California Housing dataset from sklearn.datasets.fetch_california_housing or a generated dataset; note that the older Boston housing dataset was removed from scikit-learn in version 1.2).

* Split data into training and testing sets (train_test_split).

* Initialize and train a LinearRegression model.

* Make predictions on the test set.

* Calculate MSE, RMSE, and R-squared.

* Plot actual vs. predicted values.

  2. Explore Multiple Linear Regression: Apply the same steps as above but using multiple features from the dataset.
  3. Feature Scaling (Conceptual): Understand why it's important for some models (though not strictly necessary for basic Linear Regression).
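The workflow above can be sketched end to end. This version uses a generated dataset from `make_regression` so it runs without any download; swap in `fetch_california_housing` once you're comfortable with the pattern.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic regression problem: 200 samples, 3 features, some noise
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=42)

# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Initialize, train, and predict
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Evaluate: MSE, RMSE, R-squared
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_test, y_pred)
print(f"MSE={mse:.2f}  RMSE={rmse:.2f}  R^2={r2:.3f}")

# Plot actual vs. predicted values
plt.scatter(y_test, y_pred)
plt.xlabel("actual")
plt.ylabel("predicted")
plt.show()
```

For the Multiple Linear Regression exercise, the only change is which columns of a real dataset you pass in as `X`; the fit/predict/evaluate steps are identical.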

Flashcard Topics:

  • What is the goal of a regression model?
  • Formula for Simple Linear Regression.
  • What does MSE measure?
  • What is R-squared?
  • Purpose of train_test_split().
  • What is an 'intercept' in linear regression?

Quiz Topics:

  • Calculate MSE given actual and predicted values.
  • Identify scenarios where regression is appropriate.
  • Interpret coefficients of a linear regression model.
  • Questions on model overfitting/underfitting (basic concept).

Week 3: Supervised Learning - Classification

Learning Objectives:

  • Understand the concept of classification and its applications.
  • Learn about Logistic Regression and Decision Trees.
  • Implement these models using Scikit-learn.
  • Evaluate classification models using appropriate metrics.

Core Concepts:

  • Classification: Predicting categorical values (binary or multi-class).
  • Logistic Regression: Despite its name, a classification algorithm, not a regression one.

* Sigmoid function, probability estimation, decision boundary.

  • Decision Trees: Tree-based model, decision nodes, leaf nodes, splitting criteria (Gini impurity, Entropy).
  • Model Evaluation Metrics:

* Accuracy, Precision, Recall, F1-Score.

* Confusion Matrix.

  • Overfitting & Underfitting: Introduction to these concepts and how they relate to model complexity.

Recommended Resources:

  • Online Courses: "Machine Learning" by Andrew Ng (Coursera - focus on logistic regression), "Applied Machine Learning in Python" (Coursera).
  • Books/Tutorials: "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" (Chapter 3, 6), Scikit-learn official documentation (Logistic Regression, Decision Trees).
  • Videos: StatQuest with Josh Starmer (Logistic Regression, Decision Trees, Confusion Matrix).

Practical Exercises:

  1. Implement Logistic Regression:

* Use a classification dataset (e.g., Iris, Breast Cancer Wisconsin dataset from sklearn.datasets).

* Split data, train a LogisticRegression model.

* Make predictions and evaluate using accuracy, precision, recall, and F1-score.

* Generate and interpret a Confusion Matrix.

  2. Implement Decision Tree Classifier:

* Repeat the above steps using a DecisionTreeClassifier.

* Experiment with max_depth parameter to see its effect on performance.

* (Optional) Visualize the decision tree (requires graphviz).
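Both exercises follow the same fit/predict/evaluate pattern; here is a compact sketch on the Iris dataset that trains the two models side by side.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree (max_depth=3)": DecisionTreeClassifier(max_depth=3, random_state=42),
}

results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    results[name] = accuracy_score(y_test, y_pred)
    print(name, "accuracy:", round(results[name], 3))
    print(confusion_matrix(y_test, y_pred))        # rows = actual, cols = predicted
    print(classification_report(y_test, y_pred))   # precision / recall / F1 per class
```

To see the effect of `max_depth`, try extreme values (e.g., 1 vs. 10) and compare training accuracy against test accuracy — a large gap is the signature of overfitting.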

Flashcard Topics:

  • What is the goal of a classification model?
  • Difference between Logistic Regression and Linear Regression.
  • Define Accuracy, Precision, Recall.
  • What is a Confusion Matrix?
  • What is Gini impurity in a Decision Tree?
  • What is overfitting?

Quiz Topics:

  • Interpret a Confusion Matrix to calculate metrics.
  • Identify appropriate classification algorithms for given scenarios.
  • Questions on the trade-off between precision and recall.
  • Basic understanding of how a Decision Tree makes decisions.

Week 4: Model Evaluation & Introduction to Unsupervised Learning

Learning Objectives:

  • Deepen understanding of model evaluation techniques.
  • Learn about cross-validation.
  • Introduce Unsupervised Learning with K-Means Clustering.
  • Understand the concept of dimensionality reduction (PCA).

Core Concepts:

  • Cross-Validation: K-fold cross-validation, advantages over simple train/test split.
  • Hyperparameter Tuning (Conceptual): What are hyperparameters, basic ideas like Grid Search.
  • Unsupervised Learning: No labels, finding patterns.
  • Clustering: Grouping similar data points.
  • K-Means Clustering: Centroids, iterative assignment and update steps, elbow method for K.
  • Dimensionality Reduction: Reducing the number of features.
  • Principal Component Analysis (PCA): Finding principal components, reducing data dimensions (conceptual understanding).

Recommended Resources:

  • Online Courses: "Unsupervised Learning in Python" (DataCamp), "Machine Learning" by Andrew Ng (Coursera - clustering section).
  • Books/Tutorials: "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" (Chapter 8, 9), Scikit-learn official documentation (Clustering, PCA).
  • Videos: StatQuest with Josh Starmer (K-Means Clustering, PCA, Cross-Validation).

Practical Exercises:

  1. Implement K-Fold Cross-Validation:

* Apply K-fold cross-validation to a previously built regression or classification model.

* Compare the average performance with the single train/test split performance.

  2. Implement K-Means Clustering:

* Use an unlabeled dataset or remove labels from a classification dataset (e.g., Iris, Wine dataset from sklearn.datasets).

* Apply KMeans to cluster the data.

* Visualize the clusters (e.g., using a scatter plot with different colors for clusters).

* (Optional) Use the elbow method to try and determine an optimal 'K'.

  3. Explore PCA (Conceptual & Basic Application):

* Apply PCA to a dataset with many features (e.g., digits dataset from sklearn.datasets).

* Reduce dimensions to 2 or 3 components.

* Visualize the reduced data. (Focus on understanding *why* you'd use PCA.)
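All three exercises can be tried in miniature on the Iris data (the labels are ignored for the clustering step):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

# 1. K-fold cross-validation on a classifier from Week 3
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# 2. K-Means with k=3, ignoring the true labels
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))

# 3. PCA down to 2 components for visualization
X_2d = PCA(n_components=2).fit_transform(X)
print("reduced shape:", X_2d.shape)
```

A scatter plot of `X_2d` colored by `kmeans.labels_` ties the last two steps together: PCA provides the 2-D coordinates, K-Means provides the colors.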

Flashcard Topics:

  • What is cross-validation and why is it used?
  • Difference between supervised and unsupervised learning.
  • What is the goal of clustering?
  • How does K-Means clustering work (basic steps)?
  • What is dimensionality reduction?
  • What is a hyperparameter?

Quiz Topics:

  • Identify scenarios where clustering is useful.
  • Questions on the benefits of cross-validation.
  • Basic steps of the K-Means algorithm.
  • Conceptual questions on PCA and its purpose.

General Study Tips

  • Active Learning: Don't just read; code along, experiment, and try to explain concepts in your own words.
  • Consistency: Dedicate specific time slots each day or week for studying.
  • Practice, Practice, Practice: The more you code and apply algorithms, the better you'll understand them.
  • Take Notes: Summarize key concepts and formulas.
  • Join Communities: Engage with online forums (Stack Overflow, Reddit's r/MachineLearning) or study groups.
  • Don't Be Afraid to Struggle: Machine learning can be challenging. Persistence is key.
  • Prioritize Understanding over Memorization: Focus on why algorithms work, not just how to use their functions.

Next Steps (Beyond 4 Weeks)

Upon completing this beginner plan, consider delving into:

  • More Advanced Algorithms: Support Vector Machines (SVMs), Ensemble Methods (Random Forests, Gradient Boosting).
  • Deep Learning Fundamentals: Introduction to Neural Networks, TensorFlow/Keras.
  • Feature Engineering: Techniques for creating better features from raw data.
  • Specialized Domains: Natural Language Processing (NLP), Computer Vision (CV).
  • Real-world Projects: Work on end-to-end projects on platforms like Kaggle to apply your skills.
Step 2: aistudygenius

Workflow Execution: AI Study Plan Generator (Flashcards Generation)

Category: Education

Description: Create a personalized study plan with flashcards and quizzes

User Inputs:

  • Topic: Machine Learning
  • Level: Beginner
  • Duration: 4 weeks

Flashcards Generated

This section provides a set of flashcards tailored to a beginner-level, 4-week Machine Learning study plan. These flashcards cover key concepts, definitions, and distinctions crucial for understanding the fundamentals of Machine Learning. They are organized by the presumed weekly progression of a beginner's curriculum.

How to Use These Flashcards:

  1. Active Recall: Read the "Front" side, try to recall the answer, then check the "Back" side.
  2. Spaced Repetition: Review flashcards at increasing intervals (e.g., daily, every few days, weekly) to improve long-term retention.
  3. Self-Quizzing: Use these flashcards to quiz yourself or a study partner.
  4. Supplement Study: Integrate these flashcards with your primary learning materials (lectures, textbooks, tutorials).

Week 1: Introduction to Machine Learning & Linear Regression

Flashcard 1

  • Front: What is Machine Learning?
  • Back: A field of artificial intelligence that enables systems to learn from data, identify patterns, and make decisions or predictions with minimal human intervention.

Flashcard 2

  • Front: Differentiate between Supervised, Unsupervised, and Reinforcement Learning.
  • Back:

* Supervised: Learns from labeled data (input-output pairs) to predict future outcomes.

* Unsupervised: Learns from unlabeled data to find hidden patterns or structures.

* Reinforcement: Learns through trial and error by interacting with an environment and receiving rewards/penalties.

Flashcard 3

  • Front: What are "Features" and "Labels" in the context of Machine Learning?
  • Back:

* Features: The input variables (attributes) used to make predictions.

* Labels: The output variable or target that we are trying to predict.

Flashcard 4

  • Front: What is the primary goal of Linear Regression?
  • Back: To model the relationship between a dependent variable (label) and one or more independent variables (features) by fitting a linear equation to the observed data, primarily for prediction.

Flashcard 5

  • Front: What is a "Cost Function" in Machine Learning, and what is its role in Linear Regression?
  • Back: A function that quantifies the error or "cost" of a model's predictions compared to the actual values. In Linear Regression, the Mean Squared Error (MSE) is commonly used to measure the average squared difference between predicted and actual values, guiding the model to find the best-fit line.
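To make the definition concrete, here is MSE computed by hand for three toy predictions (the numbers are illustrative):

```python
actual = [3.0, 5.0, 2.0]
predicted = [2.5, 5.0, 4.0]

# MSE = average of squared differences: (0.25 + 0.0 + 4.0) / 3
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
print(round(mse, 4))  # 1.4167
```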

Week 2: Model Evaluation & Logistic Regression

Flashcard 6

  • Front: Explain the concepts of "Overfitting" and "Underfitting."
  • Back:

* Overfitting: When a model learns the training data too well, capturing noise and specific details, leading to poor performance on new, unseen data.

* Underfitting: When a model is too simple to capture the underlying patterns in the training data, resulting in high error on both training and test data.

Flashcard 7

  • Front: What is "Bias" and "Variance" in the context of model performance?
  • Back:

* Bias: The error introduced by approximating a real-world problem with a simplified model. High bias leads to underfitting.

* Variance: The amount that the estimate of the target function will change if different training data were used. High variance leads to overfitting.

Flashcard 8

  • Front: What is "Gradient Descent" and why is it used?
  • Back: An iterative optimization algorithm used to find the minimum of a function (typically a cost function). It works by taking steps proportional to the negative of the gradient of the function at the current point, moving towards the steepest descent.
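A toy illustration of the idea: minimizing f(w) = (w - 3)^2, whose gradient is 2(w - 3), by repeatedly stepping against the gradient.

```python
w = 0.0    # initial guess
lr = 0.1   # learning rate (step size)

for _ in range(100):
    gradient = 2 * (w - 3)   # derivative of (w - 3)**2 at the current w
    w -= lr * gradient       # step opposite the gradient

print(round(w, 6))  # converges to 3.0, the minimum
```

Training a real model works the same way, except w is a vector of parameters and the function being minimized is the cost function over the training data.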

Flashcard 9

  • Front: What is the primary purpose of Logistic Regression?
  • Back: To perform binary classification by estimating the probability that an instance belongs to a particular class. It uses a sigmoid function to map predictions to a probability between 0 and 1.

Flashcard 10

  • Front: What is the "Sigmoid Function" and its role in Logistic Regression?
  • Back: A non-linear activation function that maps any real-valued number to a value between 0 and 1. In Logistic Regression, it transforms the linear output into a probability, making it suitable for classification.
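The function itself is one line; evaluating it at a few points shows the squashing behavior:

```python
import math

def sigmoid(z: float) -> float:
    """Map any real-valued input to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))    # 0.5 -- the usual decision boundary
print(sigmoid(4))    # close to 1
print(sigmoid(-4))   # close to 0
```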

Week 3: Introduction to Classification Algorithms

Flashcard 11

  • Front: How does a Decision Tree make predictions?
  • Back: By partitioning the data into subsets based on features, creating a tree-like structure of decisions (nodes) and outcomes (leaves). Each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label.

Flashcard 12

  • Front: What is the core idea behind a Support Vector Machine (SVM)?
  • Back: To find an optimal hyperplane that best separates data points of different classes in a high-dimensional space, maximizing the margin between the closest data points (support vectors) of each class.

Flashcard 13

  • Front: How does the K-Nearest Neighbors (KNN) algorithm classify a new data point?
  • Back: It classifies a new data point based on the majority class of its 'K' nearest neighbors in the feature space. It's a non-parametric, lazy learning algorithm that doesn't build a model during training.

Flashcard 14

  • Front: What is a common "distance metric" used in KNN?
  • Back: Euclidean distance is the most common, but Manhattan distance or Minkowski distance can also be used, depending on the data characteristics.
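Euclidean distance is simple to compute by hand; this is a minimal version of what KNN implementations do internally for every neighbor comparison:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two equal-length points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(euclidean((1.0, 2.0), (4.0, 6.0)))  # 5.0 (a 3-4-5 right triangle)
```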

Week 4: Introduction to Neural Networks & Unsupervised Learning

Flashcard 15

  • Front: What is the basic building block of an Artificial Neural Network (ANN)?
  • Back: A "neuron" or "perceptron," which receives inputs, performs a weighted sum, adds a bias, and passes the result through an activation function to produce an output.

Flashcard 16

  • Front: What is the purpose of an "Activation Function" in a neural network?
  • Back: To introduce non-linearity into the network, allowing it to learn complex patterns and relationships in the data that a purely linear model could not capture. Examples include Sigmoid, ReLU, Tanh.

Flashcard 17

  • Front: What is the primary goal of Unsupervised Learning?
  • Back: To discover hidden patterns, structures, or relationships within unlabeled data without prior knowledge of output variables.

Flashcard 18

  • Front: Explain the basic idea behind K-Means Clustering.
  • Back: An unsupervised learning algorithm that partitions 'n' data points into 'k' clusters. It iteratively assigns each data point to the cluster whose centroid is nearest, and then recalculates the centroids based on the new cluster assignments, aiming to minimize the within-cluster sum of squares.

Flashcard 19

  • Front: What is a "Centroid" in K-Means Clustering?
  • Back: The center point of a cluster, calculated as the mean (average) of all data points belonging to that cluster. It represents the cluster's "average" position in the feature space.
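For instance, the centroid of three 2-D points is just their coordinate-wise mean:

```python
import numpy as np

cluster_points = np.array([[1.0, 2.0],
                           [3.0, 4.0],
                           [5.0, 0.0]])
centroid = cluster_points.mean(axis=0)  # mean of each coordinate
print(centroid)  # [3. 2.]
```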

This comprehensive set of flashcards will serve as a valuable tool for reinforcing key concepts and definitions throughout your 4-week beginner Machine Learning study journey.
