Stochastic Gradient Descent

What is Stochastic Gradient Descent?

The weights of a neural network cannot be calculated analytically. Instead, they must be discovered through an empirical optimization procedure called stochastic gradient descent (SGD). At each step, the weights are nudged in the direction that reduces the training loss, following the negative of the loss gradient. The procedure is "stochastic" because each gradient is estimated from a single randomly chosen training example (or a small mini-batch) rather than the full dataset, which makes the updates cheap and noisy but effective in practice.
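In symbols, one SGD step is w ← w − η∇L(w), where η is the learning rate and ∇L(w) is the loss gradient estimated from the current example. The sketch below illustrates this on a toy linear-regression problem; the synthetic data, learning rate, and variable names are illustrative assumptions, not part of the original entry.

```python
import numpy as np

# Minimal SGD sketch (assumed setup): fit y = w*x + b to noisy data
# by updating the parameters from one randomly drawn example at a time.
rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 plus noise (illustrative only).
X = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * X + 1.0 + 0.1 * rng.standard_normal(200)

w, b = 0.0, 0.0   # weights to be discovered empirically
lr = 0.1          # learning rate (step size eta)

for epoch in range(20):
    for i in rng.permutation(len(X)):   # "stochastic": shuffled single examples
        err = (w * X[i] + b) - y[i]     # prediction error on one example
        w -= lr * err * X[i]            # gradient of 0.5*err**2 w.r.t. w
        b -= lr * err                   # gradient of 0.5*err**2 w.r.t. b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # should land near 2 and 1
```

In a real neural network the same loop runs over mini-batches, with the gradients supplied by backpropagation rather than this closed-form expression.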
