Rating: 4.5/5
Learn how to utilize some of the most valuable tech skills on the market today: Scala and Spark! In this course we will show you how to use Scala and Spark to analyze Big Data. Scala and Spark are two of the most in-demand skills right now, and with this course you can learn them quickly and easily! This course comes packed with content:

- Crash course in Scala programming
- Spark and Big Data ecosystem overview
- Using Spark's MLlib for machine learning
- Scaling up Spark jobs using Amazon Web Services
- Learning how to use Databricks' Big Data platform
- and much more!

This course comes with full projects for you, including topics such as analyzing financial data or using machine learning to classify e-commerce customer behavior! We teach the latest methodologies of Spark 2.0 so you can learn how to use SparkSQL, Spark DataFrames, and Spark's MLlib! After completing this course you will feel comfortable putting Scala and Spark on your resume! Thanks, and I will see you inside the course!
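For a taste of what the SparkSQL and DataFrame material looks like in practice, here is a minimal sketch. It is illustrative only: the course itself teaches the Scala API, while this sketch uses PySpark, and the tiny e-commerce rows are invented; the DataFrame and SQL shapes are the same in both languages.

```python
# A minimal sketch, assuming a local PySpark installation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SparkSQLSketch").getOrCreate()

# Hypothetical e-commerce rows, just to have something to query.
df = spark.createDataFrame(
    [("alice", "mobile", 120.0), ("bob", "web", 80.0), ("carol", "mobile", 45.5)],
    ["customer", "channel", "spend"],
)

# Register the DataFrame as a temporary view and query it with SQL.
df.createOrReplaceTempView("orders")
spark.sql(
    "SELECT channel, AVG(spend) AS avg_spend FROM orders GROUP BY channel"
).show()

spark.stop()
```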
Rating: 4/5
Welcome everyone to the course. SQL is also popularly referred to as "Sequel". In this course you will learn, step by step, how to install SQL Server and communicate with the database. We will be downloading and installing SQL Server Express 2014 (which is a free download). As a sample, we will use AdventureWorks 2014 as our database. Along the way, we will be getting data from the database by writing T-SQL queries. But AdventureWorks 2014 will be our main focus. As far as T-SQL goes, we will start with simple SELECT statements and add new features as we go. We will cover grouping and filtering data, as well as sorting the data that is returned. This course will provide you with the basic knowledge and skills to install and query a SQL Server database. You will also become familiar with SQL Server Management Studio 2014, which is the management tool used to administer and manage SQL Servers. What you will learn in this course includes:

- How to download and install SQL Server
- How to download and install a sample database
- How to download and install SQL Server Management Studio (SSMS)
- What a database is
- How to make database records unique with a primary key
- How to sort data
- How to group data
- How to filter data
- How to get data from a database
- How to count records

This is a video-based course of about 2 hours, with lots of practical hands-on examples to follow along with.
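As a rough illustration of the query topics listed above (SELECT, WHERE, GROUP BY, sorting, counting), here is a small sketch. It uses Python's built-in sqlite3 module purely so it runs anywhere; the course itself uses SQL Server, T-SQL, and the AdventureWorks database, and the toy "products" table below is invented.

```python
# A minimal sketch of the query shapes covered in the course, using an
# in-memory SQLite database. The same clauses work in T-SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, category TEXT, price REAL);
    INSERT INTO products (category, price) VALUES
        ('Bikes', 500.0), ('Bikes', 750.0), ('Helmets', 35.0);
""")

# Filter, group, count, and sort the data that comes back.
for row in conn.execute(
    "SELECT category, COUNT(*) AS n, AVG(price) AS avg_price "
    "FROM products WHERE price > 30 "
    "GROUP BY category ORDER BY avg_price DESC"
):
    print(row)
```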
Rating: 4.5/5
New! Completely updated and re-recorded for Spark 3, IntelliJ, Structured Streaming, and a stronger focus on the DataSet API. "Big data" analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You'll learn those same techniques, using your own Windows system right at home. It's easier than you might think, and you'll be learning from an ex-engineer and senior manager from Amazon and IMDb. Spark works best when using the Scala programming language, and this course includes a crash course in Scala to get you up to speed quickly. For those more familiar with Python, however, a Python version of this class is also available: "Taming Big Data with Apache Spark and Python - Hands On". Learn and master the art of framing data analysis problems as Spark problems through over 20 hands-on examples, and then scale them up to run on cloud computing services in this course.

- Learn the concepts of Spark's Resilient Distributed Datasets, DataFrames, and Datasets
- Get a crash course in the Scala programming language
- Develop and run Spark jobs quickly using Scala, IntelliJ, and SBT
- Translate complex analysis problems into iterative or multi-stage Spark scripts
- Scale up to larger data sets using Amazon's Elastic MapReduce service
- Understand how Hadoop YARN distributes Spark across computing clusters
- Practice using other Spark technologies, like Spark SQL, DataFrames, DataSets, Spark Streaming, Machine Learning, and GraphX

By the end of this course, you'll be running code that analyzes gigabytes worth of information – in the cloud – in a matter of minutes. We'll have some fun along the way. You'll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you might like in the process! We'll analyze a social graph of superheroes, and learn who the most "popular" superhero is – and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to Spider-Man? You'll find the answer. This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. Over 8 hours of video content is included, with over 20 real examples of increasing complexity you can build, run, and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX. Enroll now, and enjoy the course!

"I studied Spark for the first time using Frank's course 'Apache Spark 2 with Scala - Hands On with Big Data!'. It was a great starting point for me, gaining knowledge in Scala and, most importantly, practical examples of Spark applications. It gave me an understanding of all the relevant Spark core concepts: RDDs, DataFrames & Datasets, Spark Streaming, and AWS EMR. Within a few months of completing it, I used the knowledge gained from the course to propose to my current company that I work primarily on Spark applications. Since then I have continued to work with Spark. I would highly recommend any of Frank's courses, as he simplifies concepts well and his teaching manner is easy to follow and continue with!" - Joey Faherty
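As a rough sketch of the warm-up style of example described above (counting movie ratings), here is a tiny Spark job. The course itself writes this kind of thing in Scala with IntelliJ and SBT; this sketch uses the PySpark RDD API instead, and the input file name and format are hypothetical stand-ins.

```python
# A minimal RDD-style sketch: histogram of movie rating values.
from pyspark import SparkContext

sc = SparkContext("local", "RatingsHistogram")

# Assume lines like "userID movieID rating timestamp", whitespace-separated.
lines = sc.textFile("ratings.txt")          # hypothetical input file
ratings = lines.map(lambda line: line.split()[2])
counts = ratings.countByValue()             # dict of rating value -> count

for rating, count in sorted(counts.items()):
    print(rating, count)
```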
Rating: 3/5
MongoDB is a distributed database at its core, so high availability, horizontal scaling, and geographic distribution are built in and easy to use. This training will help you master the leading document-oriented NoSQL database. I have created this course for .NET developers who want to learn and work with MongoDB. The course also works fine with .NET version 4 and above.
Rating: 4.5/5
LAST UPDATED: November 2020 (Source Code Included for Lectures) Get ready to acquire some seriously marketable programming skills! You can't consider yourself a complete end-to-end developer until you can code in SQL. Today, data has become the hottest topic in technology, and a company's biggest asset is its data. All databases require the language SQL to store and retrieve data. Salaries for junior-level SQL developers are upwards of $70,000 - $90,000 a year! The great thing is, for this course, you do not need any prior experience in programming whatsoever. SQL is a different animal, and we're going to demystify the language from scratch and prepare you with plenty of progressively challenging assignments, so that by the time you've completed the course (in 2 months), you can call yourself an Oracle SQL Master! Oracle is the most popular relational database in the world! This course will prepare you to be job-ready in just 1 month of study and practice. All exercises and solutions are in the lectures. In several lectures I ask students to pause the video and complete the assignment before resuming to watch my solution. MAKE SURE YOU WORK OUT THE PROBLEMS ON YOUR OWN BEFORE MOVING ON TO MY SOLUTION! With over 62,000 enrolled students and a 4.5-star rating, this is a Udemy best-selling course. Do you have no prior experience in SQL development? This course is perfect for you. Don't take it from me, take it from actual students that took this course: "I am a beginner, and the way this course starts is perfect for the person who has no introduction to SQL or Oracle." Do you have prior experience, but need a refresher or to fine-tune your skills? This is the course for you. Again, I'll let my students do the talking: "I had a good base of knowledge from my last employment. This course is constantly proving useful to supercharge my actual knowledge base. Very good one!" Have you taken a SQL course before, but felt confused on certain topics or not completely satisfied in your abilities? A lot of my students have shared similar concerns: "I had previously taken a college course about databases and SQL, but these ten hours of content were more clear and useful than the course and textbook." Topics covered in this course:

- Basics of Tables
- SELECT and WHERE Clause
- WHERE, AND & OR with Operators
- BETWEEN, IN and NULL
- Single Table Queries
- Single Row Functions
- Grouping Functions
- GROUP BY and HAVING Clause
- Joins
- Inner and Outer Joins
- EXISTS & NOT EXISTS Operators
- Creating Your Own Tables
- Using ALTER
- Creating Tables with SELECT & UPDATE Data
- DELETE, TRUNCATE, and DROP Commands
- System Tables, Pseudo Columns & Deleting Duplicates (Newly Added)
- Views and Other Objects and Commands (Newly Added)
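As a small illustration of the join and GROUP BY / HAVING topics in the list above, here is a sketch. It uses Python's built-in sqlite3 module only so the snippet is self-contained and runnable; the course itself targets Oracle SQL, and the table and column names below are invented.

```python
# A minimal sketch of an inner join combined with GROUP BY and HAVING,
# run against a throwaway in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees   (emp_id  INTEGER PRIMARY KEY, dept_id INTEGER, salary REAL);
    INSERT INTO departments VALUES (1, 'Sales'), (2, 'IT');
    INSERT INTO employees   VALUES (10, 1, 50000), (11, 1, 62000), (12, 2, 71000);
""")

# Only keep departments whose average salary exceeds 55,000.
query = """
    SELECT d.name, AVG(e.salary) AS avg_salary
    FROM employees e
    JOIN departments d ON d.dept_id = e.dept_id
    GROUP BY d.name
    HAVING AVG(e.salary) > 55000
"""
for row in conn.execute(query):
    print(row)
```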
Rating: 4/5
Welcome to the Natural Language Processing (NLP) Interview Test Series. We have created these full-length practice tests based on our real interview experience. With the help of these practice tests, you will be able to clear your NLP interview on the first attempt. This is the most comprehensive test series online to help you ace your Data Science / Natural Language Processing interviews!
Rating: 4.5/5
New! Updated for Spark 3, more hands-on exercises, and a stronger focus on DataFrames and Structured Streaming. "Big data" analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You'll learn those same techniques, using your own Windows system right at home. It's easier than you might think. Learn and master the art of framing data analysis problems as Spark problems through over 20 hands-on examples, and then scale them up to run on cloud computing services in this course. You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

- Learn the concepts of Spark's DataFrames and Resilient Distributed Datasets
- Develop and run Spark jobs quickly using Python
- Translate complex analysis problems into iterative or multi-stage Spark scripts
- Scale up to larger data sets using Amazon's Elastic MapReduce service
- Understand how Hadoop YARN distributes Spark across computing clusters
- Learn about other Spark technologies, like Spark SQL, Spark Streaming, and GraphX

By the end of this course, you'll be running code that analyzes gigabytes worth of information – in the cloud – in a matter of minutes. This course uses the familiar Python programming language; if you'd rather use Scala to get the best performance out of Spark, see my "Apache Spark with Scala - Hands On with Big Data" course instead. We'll have some fun along the way. You'll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you might like in the process! We'll analyze a social graph of superheroes, and learn who the most "popular" superhero is – and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to The Incredible Hulk? You'll find the answer. This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. 7 hours of video content is included, with over 20 real examples of increasing complexity you can build, run, and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX. Wrangling big data with Apache Spark is an important skill in today's technical world. Enroll now! "I studied 'Taming Big Data with Apache Spark and Python' with Frank Kane, and it helped me build a great platform for Big Data as a Service for my company. I recommend the course!" - Cleuton Sampaio De Melo Jr.
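Since this course does use Python, here is a minimal sketch in the spirit of its early exercises: counting words in a book with the DataFrame API. The input file name is a hypothetical placeholder, and this is only a sketch of the general approach, not the course's own code.

```python
# A small word-count sketch using PySpark DataFrames.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("WordCount").getOrCreate()

lines = spark.read.text("book.txt")          # hypothetical input; one column named "value"
words = lines.select(F.explode(F.split(F.col("value"), r"\W+")).alias("word"))
counts = (
    words.filter(F.col("word") != "")
         .groupBy(F.lower(F.col("word")).alias("word"))
         .count()
         .orderBy(F.desc("count"))
)
counts.show(10)   # ten most frequent words

spark.stop()
```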
Rating: 5/5
MongoDB makes it possible to store and process large sets of data in ways that increase business value. The flexibility of unstructured, schema-less storage, combined with robust querying and post-processing functionality, makes MongoDB a compelling solution for enterprise big data needs. We need to discuss database schemas. Yes, MongoDB is touted as schema-less, but here's where we show that proper design is what allows our collections to scale. Indexing is something everyone talks about, but few understand. We'll explain MongoDB indexing and index properties, because a successful indexing strategy is key to performance and scaling. Finally, we'll talk about CRUD commands from the MongoDB client and how to write effective queries. Taking this course will help you understand the standards and data types MongoDB supports, along with best practices for designing collections that scale and indexing them. You will also learn some basic CRUD commands. About the Author: Micheal Shallop started programming in 1981 on a Tandy TRS-80 Model 1 and hasn't stopped since. He graduated in 1991 from Oklahoma State University with an Honors degree in Computer Science. In his career, he's coded in many programming languages and has used a variety of databases, relational and otherwise. He was the technical author of a patent awarded in 2011 for his work on real-time data collection, aggregation, and forecasting in a conventional (automotive) business. He is currently designing and writing a back-end, event-driven, object-oriented, data-agnostic framework utilizing AMQP as the data transport vector and PHP 7.1 as the primary language. He has been programming in PHP for MongoDB since 2010 and has been the architect of several systems, mostly back-end frameworks. Micheal is interested in anything with a programming language behind it. Most recently, he has been experimenting with Arduino, programming on the Raspberry Pi, and writing a social media site in Python. He is also technically skilled in RabbitMQ, general database technology, Python, C/C++, and Linux.
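As a brief illustration of the CRUD and indexing ideas described above, here is a sketch using the pymongo driver (the training itself works from the MongoDB client, and the author's production work is in PHP). The database, collection, and field names below are invented, and a local MongoDB server is assumed.

```python
# A minimal CRUD-plus-index sketch against a local mongod instance.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB server
orders = client["shop"]["orders"]                   # hypothetical database and collection

# Create: insert a document; no schema needs to be declared up front.
orders.insert_one({"customer": "alice", "total": 42.5, "items": ["book", "pen"]})

# Index: a single-field index supporting queries and sorts on "customer".
orders.create_index([("customer", ASCENDING)])

# Read and update.
doc = orders.find_one({"customer": "alice"})
orders.update_one({"customer": "alice"}, {"$set": {"total": 45.0}})

# Delete.
orders.delete_one({"customer": "alice"})
```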
Rating: no ratings yet
This Oracle 11g video-based training course from Infinite Skills will teach you the basics of operating an Oracle Database server and environment. You will also discover the basics of how to manage, develop for, and administer an Oracle Database. This tutorial is designed for the beginner. The training will teach you the basics of what it means to administer an Oracle Database. Throughout the training, you will be able to better understand the path you can continue on from here, whether that is to become a DBA, developer, or administrator of Oracle Databases. You will learn about the processes and architectural concepts of a database. This video training also covers topics such as storage, data modelling, basic SQL, database tasks, and how all of these apply specifically to Oracle Databases. By the completion of this video tutorial, you will have a fundamental understanding of how an Oracle Database server and environment function, as well as how database concepts apply specifically to an Oracle Database. Working files are included to assist you as you work through this computer software training course.
Rating: 3/5
Apache Spark: Having started as a research project at the University of California in 2009, Apache Spark is currently one of the most widely used analytics engines. No wonder: it can process data on an enormous scale, supports multiple coding languages (you can use Java, Scala, Python, R, and SQL), and runs on its own or in the cloud, as well as on other systems (e.g., Hadoop or Kubernetes). In this Apache Spark tutorial, I will introduce you to one of the most notable use cases of Apache Spark: machine learning. In less than two hours, we will go through every step of a machine learning project that will provide us with an accurate telecom customer churn prediction in the end. This is going to be a fully hands-on experience, so roll up your sleeves and prepare to give it your best! First and foremost, how does Apache Spark machine learning work? Before you learn Apache Spark, you need to know it comes with a few inbuilt libraries. One of them is called MLlib. To put it simply, it allows the Spark Core to perform machine learning tasks – and (as you will see in this Apache Spark tutorial) does so at breathtaking speed. Due to its ability to handle significant amounts of data, Apache Spark is perfect for tasks related to machine learning, as it can ensure more accurate results when training algorithms. Mastering Apache Spark machine learning is also a skill highly sought after by employers and headhunters: more and more companies are interested in applying machine learning solutions for business analytics, security, or customer service. Hence, this practical Apache Spark tutorial can become your first step towards a lucrative career! Learn Apache Spark by creating a project from A to Z yourself! I am a firm believer that the best way to learn is by doing. That's why I haven't included any purely theoretical lectures in this Apache Spark tutorial: you will learn everything on the way and be able to put it into practice straight away. Seeing the way each feature works will help you learn Apache Spark machine learning thoroughly, by heart. I will also be providing some materials in ZIP archives. Make sure to download them at the beginning of the course, as you will not be able to continue with the project without them. And that's not all you're getting from this course – can you believe it? Apart from Spark itself, I will also introduce you to Databricks – a platform that simplifies handling and organizing data for Spark. It was founded by the same team that originally started Spark, too. In this course, I will explain how to create an account on Databricks and use its Notebook feature for writing and organizing your code. After you finish my Apache Spark tutorial, you will have a fully functioning telecom customer churn prediction project. Take the course now, and have a much stronger grasp of machine learning and data analytics in just a few hours!

Spark Machine Learning Project (Telecom Customer Churn Prediction) for beginners using Databricks Notebook (Unofficial) (Community Edition Server): In this data science machine learning project, we will create a telecom customer churn prediction project using classification models (Logistic Regression, Naive Bayes, and a One-vs-Rest classifier), a few of the available predictive models. Explore Apache Spark and machine learning on the Databricks platform.

- Launching a Spark cluster
- Creating a data pipeline
- Processing that data using a machine learning model (Spark ML library)
- Hands-on learning
- Real-time use case
- Publishing the project on the web to impress your recruiter
- Graphical representation of data using a Databricks notebook
- Transforming structured data using SparkSQL and DataFrames
- Telecom customer churn prediction: a real-time use case on Apache Spark

About Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems.
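As a condensed sketch of the kind of pipeline such a churn project builds, here is a logistic regression classifier with Spark ML. This is not the course's own code: the CSV path and column names are hypothetical, and on Databricks the `spark` session already exists, so the builder line is only needed when running locally.

```python
# A minimal Spark ML pipeline sketch for churn classification.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("ChurnSketch").getOrCreate()

# Hypothetical input; assumes "churn" is already a 0/1 numeric label column.
df = spark.read.csv("telecom_churn.csv", header=True, inferSchema=True)

# Assemble a few (assumed) numeric columns into the feature vector.
assembler = VectorAssembler(
    inputCols=["tenure", "monthly_charges", "total_charges"],
    outputCol="features",
)
lr = LogisticRegression(featuresCol="features", labelCol="churn")

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[assembler, lr]).fit(train)

# Evaluate on the held-out split.
predictions = model.transform(test)
auc = BinaryClassificationEvaluator(labelCol="churn").evaluate(predictions)
print(f"Test AUC: {auc:.3f}")
```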