Each bucket in Hive is created as a file, and bucket numbering is 1-based. When bucketing is done on partitioned tables, query optimization happens in two layers, known as bucket pruning and partition pruning. As an example, we will create a partitioned and bucketed table named "student". As of Okera's 2.0 release, Hive bucketing is now supported. HIVE-22332: Hive should ensure valid schema evolution settings since ORC-540. SparkSession in Spark 2.0 provides built-in support for Hive features, including the ability to write queries using HiveQL, access to Hive UDFs, and the ability to read data from Hive tables. Let us understand the details of bucketing in Hive in this article. Rename has been changed as of version 2.2.0 (HIVE-14909) so that a managed table's HDFS location is moved only if the table was created without a LOCATION clause and under its database directory. The DataNucleus bug is fixed in datanucleus-rdbms 4.1.16 (also see Interacting with Different Versions of Hive Metastore). Version 1 of the Iceberg spec defines how to manage large analytic tables using immutable file formats: Parquet, Avro, and ORC; all version 1 data and metadata files remain valid after upgrading a table to version 2. From Beeline or a standard JDBC client connected to Hive, compactions can be seen with the standard SQL statement SHOW COMPACTIONS; but this method has a couple of problems, the first being that it offers no filtering. The Hive 1.x line will continue to be maintained with Hadoop 1.x.y support. Sampling granularity is at the HDFS block size level: if the HDFS block size is 64 MB and n% of the input size is only 10 MB, then 64 MB of data is still fetched.
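The two pruning layers can be illustrated with a small sketch in plain Python. This is a simplified model, not Hive's implementation: the modulo hash and the in-memory layout are stand-ins for Hive's per-type hash function and its partition-directory/bucket-file layout on HDFS.

```python
# Sketch of partition pruning + bucket pruning for a table partitioned by `ds`
# and bucketed by `user_id` into 4 buckets. Layout: {partition: [bucket lists]}.

NUM_BUCKETS = 4

def bucket_of(user_id: int) -> int:
    # Stand-in for Hive's hash; Hive uses its own hash function per column type.
    return user_id % NUM_BUCKETS

def load(rows):
    table = {}
    for ds, user_id in rows:
        part = table.setdefault(ds, [[] for _ in range(NUM_BUCKETS)])
        part[bucket_of(user_id)].append((ds, user_id))
    return table

def query(table, ds, user_id):
    # Partition pruning: only the matching partition is read.
    part = table.get(ds, [[] for _ in range(NUM_BUCKETS)])
    # Bucket pruning: only the one bucket that can contain user_id is read.
    return [r for r in part[bucket_of(user_id)] if r[1] == user_id]

table = load([("2009-02-25", uid) for uid in range(10)])
print(query(table, "2009-02-25", 3))  # scans 1 bucket, not the whole partition
```

With both layers active, a point query touches a single bucket file inside a single partition directory instead of scanning every file in the table.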
Spark SQL version 2.3.x and above supports both Spark SQL 1.x and 2.2.x syntax. Using the LIKE keyword, you can create a new table whose structure is identical to that of an existing table. For ACID support, the table must be stored in the ORC file format with TBLPROPERTIES ("transactional"="true"), and it must be CLUSTERED BY with bucketing. Hive supports incremental view maintenance, i.e., it refreshes only the data that was affected by changes in the original source tables; in addition, it preserves the LLAP cache for existing data in the materialized view. To populate a bucketed table: set hive.enforce.bucketing = true; -- (Note: not needed in Hive 2.x onward) FROM user_id INSERT OVERWRITE TABLE user_info_bucketed PARTITION (ds='2009-02-25') SELECT userid, firstname, lastname WHERE ds='2009-02-25'; HIVE-21041: NPE, ParseException in getting schema from logical plan. Apache Hive supports transactional tables, which provide ACID guarantees. Starting with version 0.14, Hive supports all ACID properties, which enable us to use transactions, create transactional tables, and run queries like INSERT, UPDATE, and DELETE on tables. In this article, I will explain how to enable and disable the ACID Transactions Manager, create a transactional table, and finally perform INSERT, UPDATE, and DELETE operations. Bucketing is an optimization technique in Apache Spark SQL. The first user is the list bucketing pruner, which is used in the pruning phase. Extract the source folder: tar -xzvf apache-hive-1.2.1-src.tar.gz. For example: hive> insert overwrite table bucket_userinfo select userid, username from userinfo; Hive then launches a job that loads the data into buckets; in this example there are two buckets, so two files are ultimately generated according to whether userid is odd or even. Hive 3 achieves atomicity and isolation of operations on transactional tables by using techniques in write, read, insert, create, delete, and update operations that involve delta files.
Spark SQL supports the vast majority of Hive features. Bucket pruning looks possible only with Tez, by setting the property hive.tez.bucket.pruning; until HDP ships with Hive 2.0.0, what are the options for achieving a performance improvement like bucket pruning? Agenda: introduction; ORC files; partitioning vs. predicate pushdown; loading data; dynamic partitioning; bucketing; optimized sort dynamic partitioning; manual distribution; miscellaneous (sorting and predicate pushdown, debugging, bloom filters). Connecting to the Hive/Beeline CLI and running commands may not be an option for some use cases. Once the data is loaded, it is automatically placed into the 4 buckets. For example, the following tables differ only in bucketing version:

CREATE TABLE `testj2`(
  `id` int, `bn` string, `cn` string, `ad` map, `mi` array<int>)
PARTITIONED BY (`br` string)
CLUSTERED BY (bn) INTO 2 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
TBLPROPERTIES ('bucketing_version' = '2');

CREATE TABLE `testj1`(
  `id` int, `can` string, `cn` string, `ad` map, `av` boolean, `mi` …

The table must be CLUSTERED BY into some number of buckets. What is the Hive metastore? Hive ACID (Atomicity, Consistency, Isolation, Durability) brought a lot to Hive's capabilities for bucketed, managed Apache ORC tables: streaming ingestion of data, and seamless INSERT/UPDATE/DELETE operations on existing tables. We will use PySpark to demonstrate the bucketing examples. Data is allocated among a specified number of buckets, according to values derived from one or more bucketing columns. Sequence file: a splittable, compressible, row-oriented file with a general binary format. This was the initial version, with a release date of 3 December 2013.
Hive query statements are supported, including SELECT, GROUP BY, ORDER BY, SORT BY, and CLUSTER BY, as are Hive operators and complex type constructors. If bucketing enforcement is not done, one may find that the number of files generated in the table directory is not equal to the number of buckets. Update and delete in Hive are not automatic, and you will need to enable certain properties to enable ACID operations in Hive. The demo shows the partition pruning optimization in Spark SQL for Hive partitioned tables in Parquet format. Here, CLUSTERED BY is the keyword used to identify the bucketing column. I'm using Hive 3.1.2 and tried to create a bucket with bucketing version 2. Moreover, by using Hive we can process structured and semi-structured data in Hadoop. Note: Apache Hive is an open-source data warehouse system built on top of Hadoop. We have to enable bucketing by setting the following property: SET hive.enforce.bucketing=TRUE; Step 4: load data into the bucketed table with partitions. Are the hash algorithms of Tez and MR different? Moreover, we can create a bucketed_user table meeting the above requirement with the help of HiveQL. Create a bucketed table by first running: hive> set hive.enforce.bucketing = true; The provided jars should be the same version as ConfigEntry(key=spark.sql.hive.metastore.version, defaultValue=2.3.7, doc=Version of the Hive metastore). Hive is a tool that allows the implementation of data warehouses for Big Data contexts, organizing data into tables, partitions and buckets. For example, here the bucketing column is name, and so the SQL syntax has CLUSTERED BY (name). Multiple columns can be specified as bucketing columns, in which case Hive uses them together when inserting or updating data in the dataset. The demo is a follow-up to Demo: Connecting Spark SQL to Hive Metastore (with Remote Metastore Server). Suppose t1 and t2 are two bucketed tables with b1 and b2 buckets respectively.
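Following on the t1/t2 example, here is a sketch of why a bucket (map-side) join works when both tables are bucketed on the join key and b2 is a multiple of b1. The helper names and the modulo hash are illustrative, not Hive's internals:

```python
# Sketch of a bucket join: if b2 % b1 == 0, every key in bucket i of t1 can
# only appear in buckets {i, i+b1, i+2*b1, ...} of t2, so only matching bucket
# files are paired and no full shuffle of either table is needed.

def bucketize(rows, key, num_buckets):
    buckets = [[] for _ in range(num_buckets)]
    for row in rows:
        buckets[row[key] % num_buckets].append(row)   # simplified hash
    return buckets

def bucket_join(t1_buckets, t2_buckets, key):
    b1, b2 = len(t1_buckets), len(t2_buckets)
    assert b2 % b1 == 0, "bucket counts must be multiples for a bucket join"
    out = []
    for i, left in enumerate(t1_buckets):
        # Only the t2 buckets congruent to i (mod b1) can hold matching keys.
        for j in range(i % b1, b2, b1):
            for l in left:
                for r in t2_buckets[j]:
                    if l[key] == r[key]:
                        out.append((l, r))
    return out

t1 = bucketize([{"id": n} for n in range(4)], "id", 2)   # b1 = 2
t2 = bucketize([{"id": n} for n in range(4)], "id", 4)   # b2 = 4
print(bucket_join(t1, t2, "id"))
```

Each t1 bucket is compared against only b2/b1 of the t2 buckets, which is the saving that makes bucketed joins cheaper than a shuffle join.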
(For specific details, you can refer to our documentation.) A classpath in the standard format for both Hive and Hadoop. However, Hive 2.0 and 2.1 metastores use version 4.1.7, and these versions are affected. Bucketing is commonly used to optimize the performance of join queries. The Hive metastore is a database that stores metadata about your Hive tables, such as the table name, column names, data types, table location, and the number of buckets in the table. Some studies were conducted to understand ways of optimizing the performance of several storage systems for Big Data. Hive has long been one of the industry-leading systems for data warehousing in Big Data contexts, mainly organizing data into databases, tables, partitions and buckets, stored on top of an unstructured distributed file system like HDFS. hive.enforce.bucketing — default value: Hive 0.x: false, Hive 1.x: false, Hive 2.x: removed, which effectively makes it always true (HIVE-12331). 0: jdbc:hive2://cdh-vm.dbaglobe.com:10000/def> set hive.enforce.bucketing=true; Hive will read data only from some buckets, as per the size specified in the sampling query. With Hive version 0.14 and above, you can perform update and delete on Hive tables. Hive supports the text file format by default, and it also supports the binary sequence file format, ORC files, Avro data files, and Parquet files. Hive provides a mechanism to project structure onto this data and to query the data using a SQL-like language called HiveQL. Setting this flag to true will treat legacy timestamps as time zone agnostic. Timestamps are hard to interpret. What Hive version have you worked with?
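The sampling behavior described above, where Hive reads only some buckets, corresponds to HiveQL's TABLESAMPLE(BUCKET x OUT OF y ON col). A minimal sketch of the selection rule, using a simplified modulo hash in place of Hive's per-type hash:

```python
# Sketch of TABLESAMPLE(BUCKET x OUT OF y ON col): roughly 1/y of the rows are
# returned by keeping only the rows whose hash falls in sample group x.
# x is 1-based, matching the HiveQL syntax.

def tablesample(rows, col, x, y):
    return [r for r in rows if r[col] % y == x - 1]   # simplified hash

rows = [{"user_id": n} for n in range(32)]
sample = tablesample(rows, "user_id", 1, 4)   # ~1/4 of the table
print(len(sample))
```

When the table is already bucketed on col with a bucket count that is a multiple of y, Hive can satisfy this by reading whole bucket files and skipping the rest, which is why sampling pairs so well with bucketing.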
Hive will automatically set the number of reduce tasks equal to the number of buckets mentioned in the table definition. There are two ways to use reserved keywords as identifiers: (1) use quoted identifiers, or (2) set hive.support.sql11.reserved.keywords=false. The command SET hive.enforce.bucketing=true; ensures the correct number of reducers while using the CLUSTER BY clause for bucketing a column. The CLUSTERED BY clause is used to do bucketing in Hive, and the keyword is followed by a list of bucketing columns in parentheses. This allows better performance while reading data and when joining two tables. Hortonworks Hadoop Hive; Hortonworks ODBC Driver for Apache Hive version 2.1.5 or later. Resolution: whitelist the short parameter name being used (for the above error, this would be execution.engine, not hive.execution.engine) in the Hive driver. Hive was further donated to the Apache Software Foundation, which has maintained it since. hive> load data local inpath '/home/codegyani/hive/emp_details' into table emp_demo; Enable bucketing by using the following command: set hive.enforce.bucketing = true; See also: Advantages and Disadvantages of Hive Partitioning & Bucketing. Hive tutorial 1 covers Hive internal and external tables, Hive DDL, Hive partitions, Hive buckets, and Hive serializers and deserializers; the concept of a table in Hive is very similar to a table in a relational database. The second version of ACID carries several improvements, including performance just as good as non-ACID. Two of the more interesting features I've come across so far have been partitioning and bucketing. Working with MapReduce versions 2.x and 3.x, a lot more functionality was introduced, and the bug was solved. To import Hive packages in Eclipse, run the following command: mvn eclipse:eclipse.
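The effect of hive.enforce.bucketing described above, one reduce task per bucket with each reducer writing one bucket file, can be sketched as follows. The file naming and the use of Python's hash are illustrative stand-ins, not Hive's actual internals:

```python
# Sketch: with bucketing enforced, rows are shuffled so that reducer i receives
# exactly the rows with hash(key) % num_buckets == i, and each reducer writes
# one bucket file, giving exactly as many files as buckets.

NUM_BUCKETS = 4  # matches CLUSTERED BY ... INTO 4 BUCKETS

def shuffle_to_reducers(rows, key):
    reducers = {i: [] for i in range(NUM_BUCKETS)}
    for row in rows:
        reducers[hash(row[key]) % NUM_BUCKETS].append(row)
    return reducers

def write_bucket_files(reducers):
    # One output file per reducer/bucket; names like 000000_0 are illustrative.
    return {f"{i:06d}_0": rows for i, rows in reducers.items()}

files = write_bucket_files(shuffle_to_reducers([{"id": n} for n in range(8)], "id"))
print(sorted(files))   # four bucket files, one per reducer
```

Without enforcement, writers are free to produce a different number of files, which is exactly the mismatch between file count and bucket count warned about earlier.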
You can specify the Hive-specific file_format and row_format using the OPTIONS clause, which is a case-insensitive string map. Note that we specify a column (user_id) on which to base the bucketing. With bucketing in Hive, we can group similar kinds of data and write them to one single file. ACID stands for four traits of database transactions: Atomicity (an operation either succeeds completely or fails; it does not leave partial data), Consistency (once an application performs an operation, the results of that operation are visible to it in every subsequent operation), Isolation (an incomplete operation by one user does not cause unexpected side effects for other users), and Durability. Answer (1 of 2): The minimum requisite to perform Hive CRUD using ACID operations is Hive version 0.14 or later, with bucketing for the partitioned table. hive.exec.list.bucketing.default.dir (HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME): the default directory name used in list bucketing. Apache Hive is a data warehouse software project built on top of Apache Hadoop for providing data query and analysis. Hive bucketing can also be size-based. A Hive table is a fundamental unit of data in Hive that shares a common schema/DDL. Hive version 0.9.0 fixed the bug in TimestampWritable.java that was causing data corruption.
Use these commands to show table properties in Hive. This command will list all the properties for the Sales table: SHOW TBLPROPERTIES Sales; The following command will list only the numFiles property of the Sales table: SHOW TBLPROPERTIES Sales ('numFiles'); Appendix E documents how to default version 2 fields when reading version 1 metadata. In Hive, integral values are treated as INT by default, unless they cross the range of INT values as shown in the table above. DECIMAL(5,2) represents a total of 5 digits, of which 2 are decimal digits. HIVE-22429: Migrated clustered tables using bucketing_version 1 on Hive 3 use bucketing_version 2 for inserts. HIVE-22406: TRUNCATE TABLE fails due to MySQL limitations on the limit value. HIVE-22360: MultiDelimitSerDe returns wrong results in the last column when the loaded file has more columns than those in the table schema. For hash joins on two tables, the smaller table is broadcast while the bigger table is streamed. If you are using a Hive version before 2.x, you need to set the hive.enforce.bucketing property to true. Hive versions prior to 0.6 just renamed the table in the metastore without moving the HDFS location. The file format must be ORC with TBLPROPERTIES ('transactional'='true'). Originally, Hive required exactly one file per bucket. We are offering a list of industry-designed Apache Hive interview questions to help you ace your Hive job interview. With the help of the CLUSTERED BY clause and the optional SORTED BY clause in the CREATE TABLE statement, we can create bucketed tables. Move to the Hive directory: cd apache-hive-1.2.1-src.
Step 2: Loading data into the table sample bucket. Apache Hive is a data warehouse system constructed on top of Apache Hadoop that enables convenient data summarization, ad-hoc queries, and the evaluation of massive datasets stored in a number of databases and file structures that integrate with Hadoop, including the MapR Data Platform with MapR XD. You can obtain query status information from these files and use the files to troubleshoot query problems. Data is divided into buckets based on a specified column in the table. The latest release version may not be available in your Region during this period. Answer (1 of 4): First, you need to understand the partitioning concept, where we separate the dataset according to some condition and distribute the load horizontally. We are creating 4 buckets here. Please finish the previous demo first before this one. Bucketing is applied as the default setting. The maximum size of a string data type supported by Hive is 2 GB. For bucket optimization to kick in when joining two tables, the tables must be bucketed on the same keys/columns, and the join must be on the bucket keys/columns. The structure addresses the following requirements: (1) a multiple-dimension collection, and (2) the length of each dimension is dynamic. Assume that the "Employees" table has already been created in the Hive system. In this step, we will see the loading of data from the Employees table into the table sample bucket. Currently, Hive SerDes and UDFs are based on Hive 1.2.1, and Spark SQL can be connected to different versions of the Hive metastore (from 0.12.0 to 2.3.3).
If true, bucketing is enforced while inserting into the table. Bucketing improves performance by shuffling and sorting data prior to downstream operations such as table joins. Google Dataproc uses image versions to bundle the operating system, Big Data components, and Google Cloud Platform connectors into one package that is deployed on a cluster; for more information, see Dataproc Versioning. hive.metastore — the Hive metastore URI (e.g., thrift://a.b.com:9083). HBase2Sink is the equivalent of HBaseSink for HBase version 2. When starting Hive (in client or server mode), you can add -hiveconf param=value on the command line to set parameters. Show the bucketing version for ReduceSinkOp in the explain extended plan; this helps identify what hashing algorithm is being used by ReduceSinkOp. Loading or inserting data into a bucketed table is the same as inserting data into an ordinary table. You don't have to set the hive.enforce.bucketing property if you are using Hive 2.x or later. If you are using an older version of Hive and the hive command, then jump to exporting the table using the Hive command; alternatively, you can export directly using a Hive/Beeline command. Hive was developed at Facebook for the analysis of the large amounts of data it was accumulating day by day. Answer (1 of 2): You should not think about Hive as a regular RDBMS; Hive is better suited for batch processing over very large sets of immutable data. Spark SQL is designed to be compatible with the Hive metastore, SerDes and UDFs.
Hive bucketing is a simple form of hash partitioning. A table is bucketed on one or more columns with a fixed number of hash buckets. The bucketing happens within each partition of the table (or across the entire table if it is not partitioned). Presto 320 added continuous integration with Hive 3; Presto 321 added support for Hive bucketing v2 ("bucketing_version"="2"); Presto 325 added continuous integration with HDP 3's Hive 3; Presto 327 added support for reading from insert-only transactional tables, and added compatibility with timestamp values stored in ORC by Hive 3.1. Bucketing 2.0: Improve Spark SQL Performance by Removing Shuffle. HIVE is supported for creating a Hive SerDe table. Hive is a type of framework built on top of Hadoop for data warehousing. Bucketing is an optimization technique in both Spark and Hive that uses buckets (clustering columns) to determine data partitioning and avoid data shuffle. In this interview questions list, you will learn what a Hive variable is, Hive table types, adding nodes in Hive, the concatenation function in Hive, changing a column's data type, Hive query processor components, and Hive bucketing. Note: the property hive.enforce.bucketing = true is similar to the hive.exec.dynamic.partition = true property used in partitioning. That is why bucketing is often used in conjunction with partitioning.
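The point that bucketing happens within each partition can be sketched with an illustrative directory layout. The paths and the modulo hash below are assumptions for illustration; real Hive bucket files live under partition directories with MapReduce-style names:

```python
# Sketch: a table partitioned by ds and bucketed into 2 buckets has
# <num_partitions> * 2 bucket files, because bucketing is applied per partition.

NUM_BUCKETS = 2

def layout(rows):
    files = {}
    for row in rows:
        # Illustrative path: partition directory, then a bucket file inside it.
        path = f"ds={row['ds']}/{row['user_id'] % NUM_BUCKETS:06d}_0"
        files.setdefault(path, []).append(row)
    return files

rows = [{"ds": d, "user_id": u}
        for d in ("2009-02-25", "2009-02-26") for u in range(4)]
for path in sorted(layout(rows)):
    print(path)   # two partitions x two buckets = four files
```

This is also why bucketed joins and pruning still work on partitioned tables: within any single partition directory, bucket i always holds the same slice of the key space.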
When I created a bucket and checked the bucket file using hdfs dfs -cat, I could see that the hashing result was different. To avoid a whole-table scan while performing simple random sampling, our algorithm uses bucketing in the Hive architecture to manage the data stored on the Hadoop Distributed File System. Available metastore version options are 0.12.0 through 2.3.7 and 3.0.0 through 3.1.2. For a faster query response, the table can be partitioned by (ITEM_TYPE STRING). On this page I'm going to show you how to install the latest version, Apache Hive 3.1.2, on Windows 10 using the Windows Subsystem for Linux (WSL) Ubuntu distro. Note: set the property if your version is less than 2.1, as bucketing enforcement is disabled in Hive by default. Spark SQL bucketing on DataFrames. Cause: a configuration change in the Hive driver. The bucketing version is decided at runtime. After trying a few other storage systems, the Facebook team ultimately chose Hadoop as the storage system for Hive, since it is cost-effective and scalable. All Hive operators are supported, including relational operators (=, ⇔, ==, <>, <, >, >=, <=, etc.), arithmetic operators (+, -, *, /, %, etc.), and logical operators. External tables cannot be made ACID tables, since the changes on external tables are beyond the control of the compactor (HIVE-13175). This blog post covers the migration of Hive tables and data from version 2.x to 3.x (the target version supported by CDP). Currently released on 18 November 2017: version 2.3.2. To compile Hive with Hadoop 2 binaries, run the following command: mvn clean install -Phadoop-2,dist.
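The observation above, that the same value lands in different bucket files depending on the bucketing version, can be illustrated with two deliberately different stand-in hash functions. Neither function is Hive's actual legacy or version-2 hash; they only demonstrate why a table written under one version must not be read as the other:

```python
# Two stand-in hash functions showing the version-1 vs version-2 mismatch:
# the same key maps to different bucket numbers, so readers that assume the
# wrong bucketing_version look in the wrong file.

def bucket_v1(key: int, n: int) -> int:
    return key % n                        # stand-in for the legacy hash

def bucket_v2(key: int, n: int) -> int:
    return (key * 2654435761 >> 8) % n    # stand-in for a different hash

n = 2
mismatches = [k for k in range(100) if bucket_v1(k, n) != bucket_v2(k, n)]
print(len(mismatches) > 0)   # True: the two versions disagree for some keys
```

This is the failure mode behind HIVE-22429: tables migrated with bucketing_version 1 metadata but written with version-2 hashing break bucket pruning and bucketed joins.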
There is a plan to replace the Hive CLI with the Beeline CLI. For example, in Spark SQL: CREATE TABLE users_bucketed_and_partitioned3 (name STRING, favorite_color STRING, favorite_numbers int) USING TEXT PARTITIONED BY (favorite_color) CLUSTERED BY (name) SORTED BY … Original Hive bucketing. Configuration values to set for compaction: hive.support.concurrency = true (default is false); hive.enforce.bucketing = true (default is false; not required as of Hive 2.0); hive.exec.dynamic.partition.mode = nonstrict (default is strict). The list bucketing feature will create a sub-directory for each skewed value and a default directory for non-skewed values. This is a detailed video tutorial to understand and learn Hive partitions and the bucketing concept. Page 1: Hive: Loading Data, June 2015, Version 2.0, Ben Leonhardi. By setting this property we enable dynamic bucketing while loading data into a Hive table. Some older Hive implementations (pre-3.1.2) wrote Avro timestamps in a UTC-normalized manner, while from version 3.1.0 until 3.1.2 Hive wrote time zone agnostic timestamps. There has been a significant amount of work in Hive to make these transactional tables highly performant. The concept is the same in Scala as well. The SORTED BY clause ensures local ordering in each bucket, by keeping the rows in each bucket ordered by one or more columns. The bucket files were named such that the bucket number was implicit, based on the file's position within the lexicographic ordering of the file names. Shouldn't the hashing be the same if the bucket version is 2?
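The SORTED BY behavior described above, rows kept locally ordered within each bucket, can be sketched like this. The hash is a simplification; note that the ordering is per bucket, not a global sort of the table:

```python
# Sketch of CLUSTERED BY (name) SORTED BY (age) INTO n BUCKETS: rows are first
# hashed into buckets by `name`, then each bucket is kept locally ordered by
# `age`. Local ordering enables efficient sort-merge operations per bucket.

def cluster_and_sort(rows, cluster_key, sort_key, n):
    buckets = [[] for _ in range(n)]
    for row in rows:
        buckets[hash(row[cluster_key]) % n].append(row)
    return [sorted(b, key=lambda r: r[sort_key]) for b in buckets]

rows = [{"name": s, "age": a} for s, a in
        [("ann", 30), ("bob", 20), ("ann", 10), ("bob", 40)]]
for b in cluster_and_sort(rows, "name", "age", 2):
    print(b)   # every bucket comes out ordered by age
```

Because all rows with the same clustering key land in the same, sorted bucket, two tables bucketed and sorted the same way can be merge-joined bucket by bucket without re-sorting.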
This config specifies the default name for the default … Bucketing divides the whole data set into a specified number of small blocks. Bucketing is commonly used in Hive and Spark SQL to improve performance by eliminating the shuffle in join or group-by-aggregate scenarios. Bucketing and sorting are applicable only to persistent tables. This is ideal for a variety of write-once, read-many datasets at Bytedance. Hive is not designed for online transaction processing and does not offer real-time queries. In the case of version 10.x, Hive details are picked from the Hadoop connection in pushdown mode, and the advanced Hive/Hadoop properties can be configured in the "Hadoop Connection Custom Properties" field under the "Common Attributes" tab in the Hadoop connection. hive> dfs -ls /hive/warehouse/… 2.1 Export using Beeline into an HDFS directory. Bucketing comes into play when partitioning Hive data sets into segments is not effective, and it can overcome over-partitioning. SQL standard authorization is used as the default setup. In the case without a metadata schema, it does not generate one anymore. To use these features, you do not need to have an existing Hive setup. Incremental view maintenance will decrease the rebuild step's execution time. Therefore each partition, say Technical, will have two files, each of them storing the Technical employees' data. The present release is 2.1.1, available since 8 December 2016, but mostly 0.13 and 0.14 are in live production. Hive version 0.14 and later.
The option keys are FILEFORMAT, INPUTFORMAT, OUTPUTFORMAT, SERDE, FIELDDELIM, ESCAPEDELIM, MAPKEYDELIM, and LINEDELIM. These are two different ways of physically… Note: this class is not designed for general use, but for the list bucketing pruner only. 2.3 Use Hive's automatic bucketing; this applies when the source data has already been imported into Hive. Most of the keywords are reserved through HIVE-6617 in order to reduce ambiguity in the grammar (version 1.2.0 and later).