New upstream version 15.3.4+ds1
This commit is contained in:
parent
fead0f33c3
commit
a37d4bddc0
234 changed files with 1358 additions and 14778 deletions
@@ -130,27 +130,6 @@ variables:
  REGISTRY_HOST: "registry.gitlab.com"
  REGISTRY_GROUP: "gitlab-org"

  # Preparing custom clone path to reduce space used by all random forks
  # on GitLab.com's Shared Runners. Our main forks - especially the security
  # ones - will have this variable overwritten in the project settings, so that
  # a security-related code or code using our protected variables will be never
  # stored on the same path as the community forks.
  # Part of the solution for the `no space left on device` problem described at
  # https://gitlab.com/gitlab-org/gitlab/issues/197876.
  #
  # For this purpose the https://gitlab.com/gitlab-org-forks group was created
  # to host a placeholder for the `/builds/gitlab-org-forks` path and ensure
  # that no legitimate project will ever use it and - by mistake - execute its
  # job on a shared working directory. It also requires proper configuration of
  # the Runner that executes the job (which was prepared for our shared runners
  # by https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3977).
  #
  # Because of all of that PLEASE DO NOT CHANGE THE PATH.
  #
  # For more details and reasoning that brought this change please check
  # https://gitlab.com/gitlab-org/gitlab/-/merge_requests/24887
  GIT_CLONE_PATH: "/builds/gitlab-org-forks/${CI_PROJECT_NAME}"

include:
  - local: .gitlab/ci/*.gitlab-ci.yml
  - remote: 'https://gitlab.com/gitlab-org/frontend/untamper-my-lockfile/-/raw/main/templates/merge_request_pipelines.yml'
@@ -263,6 +263,7 @@ Style/StringConcatenation:
    - 'spec/models/custom_emoji_spec.rb'
    - 'spec/models/grafana_integration_spec.rb'
    - 'spec/models/integrations/campfire_spec.rb'
    - 'spec/models/integrations/datadog_spec.rb'
    - 'spec/models/integrations/chat_message/pipeline_message_spec.rb'
    - 'spec/models/integrations/chat_message/push_message_spec.rb'
    - 'spec/models/integrations/jenkins_spec.rb'
30  CHANGELOG.md
@@ -2,6 +2,36 @@
documentation](doc/development/changelog.md) for instructions on adding your own
entry.

## 15.3.4 (2022-09-29)

### Security (15 changes)

- [Redact user's private email in group member event webhook](gitlab-org/security/gitlab@172b8a57bd4acca14d65a4b7a5fd021babacb146) ([merge request](gitlab-org/security/gitlab!2794))
- [Redact secrets from WebHookLogs](gitlab-org/security/gitlab@7394ab9b32a7bd83b98f93e904312e469f34cd9c) ([merge request](gitlab-org/security/gitlab!2737))
- [Forbid creating a tag using default branch name](gitlab-org/security/gitlab@1b556c33aa11c32994be562cfea0ff2e5e13a54e) ([merge request](gitlab-org/security/gitlab!2799))
- [Sanitize Url and check for valid numerical errorId in error tracking](gitlab-org/security/gitlab@2a5a51b5b2839963fe7084261c8a7fcc6f09f19c) ([merge request](gitlab-org/security/gitlab!2785))
- [Add security protection for Github](gitlab-org/security/gitlab@bc23f46dba26bcdf0c773c24081e4ae3597bf751) ([merge request](gitlab-org/security/gitlab!2802))
- [Fix leaking emails in WebHookLogs](gitlab-org/security/gitlab@a31a652c331877e0f97269310ec5f1bc6266398f) ([merge request](gitlab-org/security/gitlab!2807))
- [Restrict max duration to 1 year for trace display](gitlab-org/security/gitlab@b62fd774b6f311988c7e10f3544f2aeabeab85d1) ([merge request](gitlab-org/security/gitlab!2815))
- [Use UntrustedRegexp for upload rewriter](gitlab-org/security/gitlab@2eea36acbc5687aa9806946861e73f2fb11a9654) ([merge request](gitlab-org/security/gitlab!2791))
- [Validate httpUrlToRepo to be http or https only](gitlab-org/security/gitlab@0b340ef6d6e54804445916f5b1fa53185de4b1f7) ([merge request](gitlab-org/security/gitlab!2760))
- [Respect instance level rule for editing approval rules](gitlab-org/security/gitlab@2d2a7b8652dbd1085fe1bfc0b69138aecdeaf9c8) ([merge request](gitlab-org/security/gitlab!2782))
- [Prevent users creating issues in ay project via board/issues controller](gitlab-org/security/gitlab@559b23e6942a650cafa358ea96b7ee549f76fbd6) ([merge request](gitlab-org/security/gitlab!2780))
- [Prevent serialization of sensible attributes from JsonCache](gitlab-org/security/gitlab@f712d58af3aeb3f0fe1c56a290188e19fce72ad6) ([merge request](gitlab-org/security/gitlab!2771))
- [Update TodoPolicy to handle confidential notes](gitlab-org/security/gitlab@6bd37cd0595bbf4c744a5b212fc41181c9dc88ef) ([merge request](gitlab-org/security/gitlab!2748))
- [Enforce group IP restriction on Dependency Proxy](gitlab-org/security/gitlab@cc42b5e91e04e77ade63f1fdb91e88b998c156f7) ([merge request](gitlab-org/security/gitlab!2764))
- [Fixes XSS in widget extensions](gitlab-org/security/gitlab@1d10849c7eee6207435bfd223e1f8639b2816c1e) ([merge request](gitlab-org/security/gitlab!2759))

## 15.3.3 (2022-09-01)

### Fixed (5 changes)

- [Skip file removal if GitLab managed replication is disabled](gitlab-org/gitlab@dbec61270621df70775c98946d09deca913bd187) ([merge request](gitlab-org/gitlab!96556)) **GitLab Enterprise Edition**
- [Geo: Fix redirects of LFS transfer downloads](gitlab-org/gitlab@98092958c879d1dc9dda0ba2953ba548aa0b93c0) ([merge request](gitlab-org/gitlab!96654)) **GitLab Enterprise Edition**
- [Improve blame link feature](gitlab-org/gitlab@163cadb49f96951a0f747d61a8cd1cb92b7d4296) ([merge request](gitlab-org/gitlab!96654))
- [Bypass earliest date validation in importing of iteration cadences](gitlab-org/gitlab@66f56eb2551a302d80ca0891ff0bddec1c84f025) ([merge request](gitlab-org/gitlab!96654)) **GitLab Enterprise Edition**
- [Fix user recent activity links for work item actions](gitlab-org/gitlab@9d9368545847cf558fad26a64b216a00b2db36b4) ([merge request](gitlab-org/gitlab!96654))

## 15.3.2 (2022-08-30)

### Security (17 changes)
@@ -1 +1 @@
-15.3.2
+15.3.4

2  VERSION
@@ -1 +1 @@
-15.3.2
+15.3.4
@@ -2,20 +2,10 @@
import { mapActions, mapGetters } from 'vuex';
import glFeatureFlagMixin from '~/vue_shared/mixins/gl_feature_flags_mixin';
import { REVIEW_BAR_VISIBLE_CLASS_NAME } from '../constants';
import { PREVENT_LEAVING_PENDING_REVIEW } from '../i18n';
import PreviewDropdown from './preview_dropdown.vue';
import PublishButton from './publish_button.vue';
import SubmitDropdown from './submit_dropdown.vue';

function closeInterrupt(event) {
  event.preventDefault();

  // This is the correct way to write backwards-compatible beforeunload listeners
  // https://developer.chrome.com/blog/page-lifecycle-api/#the-beforeunload-event
  /* eslint-disable-next-line no-return-assign, no-param-reassign */
  return (event.returnValue = PREVENT_LEAVING_PENDING_REVIEW);
}

export default {
  components: {
    PreviewDropdown,

@@ -35,26 +25,8 @@ export default {
  },
  mounted() {
    document.body.classList.add(REVIEW_BAR_VISIBLE_CLASS_NAME);
    /*
     * This stuff is a lot trickier than it looks.
     *
     * Mandatory reading: https://developer.mozilla.org/en-US/docs/Web/API/Window/beforeunload_event
     * Some notable sentences:
     * - "[...] browsers may not display prompts created in beforeunload event handlers unless the
     *   page has been interacted with, or may even not display them at all."
     * - "Especially on mobile, the beforeunload event is not reliably fired."
     * - "The beforeunload event is not compatible with the back/forward cache (bfcache) [...]
     *   It is recommended that developers listen for beforeunload only in this scenario, and only
     *   when they actually have unsaved changes, so as to minimize the effect on performance."
     *
     * Please ensure that this is really not working before you modify it, because there are a LOT
     * of scenarios where browser behavior will make it _seem_ like it's not working, but it actually
     * is under the right combination of contexts.
     */
    window.addEventListener('beforeunload', closeInterrupt, { capture: true });
  },
  beforeDestroy() {
    window.removeEventListener('beforeunload', closeInterrupt, { capture: true });
    document.body.classList.remove(REVIEW_BAR_VISIBLE_CLASS_NAME);
  },
  methods: {
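The closeInterrupt handler shown in this diff is self-contained enough to exercise outside a browser. A sketch (the string constant stands in for the i18n lookup, and the stub event is purely illustrative):

```javascript
// Stand-alone sketch of the backwards-compatible beforeunload handler from
// the diff above. The real component imports this string from ../i18n.
const PREVENT_LEAVING_PENDING_REVIEW = 'There are unsubmitted review comments.';

function closeInterrupt(event) {
  // Modern browsers want preventDefault(), older ones read event.returnValue,
  // and the oldest use the handler's return value; doing all three keeps the
  // "unsaved changes" prompt working everywhere.
  event.preventDefault();
  /* eslint-disable-next-line no-return-assign, no-param-reassign */
  return (event.returnValue = PREVENT_LEAVING_PENDING_REVIEW);
}

// Exercise the handler with a stub event (no DOM required):
const stub = {
  returnValue: undefined,
  prevented: false,
  preventDefault() {
    this.prevented = true;
  },
};
const result = closeInterrupt(stub);
```

Registering with `{ capture: true }` and removing the listener in `beforeDestroy`, as the component does, keeps the handler from lingering after the review bar unmounts.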
@@ -1,3 +0,0 @@
-import { __ } from '~/locale';
-
-export const PREVENT_LEAVING_PENDING_REVIEW = __('There are unsubmitted review comments.');
@@ -1,12 +1,9 @@
import { isEmpty } from 'lodash';

import createFlash from '~/flash';
import { scrollToElement } from '~/lib/utils/common_utils';
import { __ } from '~/locale';

import { CHANGES_TAB, DISCUSSION_TAB, SHOW_TAB } from '../../../constants';
import service from '../../../services/drafts_service';

import * as types from './mutation_types';

export const saveDraft = ({ dispatch }, draft) =>

@@ -18,7 +15,6 @@ export const addDraftToDiscussion = ({ commit }, { endpoint, data }) =>
    .then((res) => res.data)
    .then((res) => {
      commit(types.ADD_NEW_DRAFT, res);

      return res;
    })
    .catch(() => {

@@ -33,7 +29,6 @@ export const createNewDraft = ({ commit }, { endpoint, data }) =>
    .then((res) => res.data)
    .then((res) => {
      commit(types.ADD_NEW_DRAFT, res);

      return res;
    })
    .catch(() => {
31  app/assets/javascripts/blob/blob_blame_link.js  Normal file
@@ -0,0 +1,31 @@
function addBlameLink(containerSelector, linkClass) {
  const containerEl = document.querySelector(containerSelector);

  if (!containerEl) {
    return;
  }

  containerEl.addEventListener('mouseover', (e) => {
    const isLineLink = e.target.classList.contains(linkClass);
    if (isLineLink) {
      const lineLink = e.target;
      const lineLinkCopy = lineLink.cloneNode(true);
      lineLinkCopy.classList.remove(linkClass, 'diff-line-num');

      const { lineNumber } = lineLink.dataset;
      const { blamePath } = document.querySelector('.line-numbers').dataset;
      const blameLink = document.createElement('a');
      blameLink.classList.add('file-line-blame');
      blameLink.href = `${blamePath}#L${lineNumber}`;

      const wrapper = document.createElement('div');
      wrapper.classList.add('line-links', 'diff-line-num');

      wrapper.appendChild(blameLink);
      wrapper.appendChild(lineLinkCopy);
      lineLink.replaceWith(wrapper);
    }
  });
}

export default addBlameLink;
@@ -1,7 +1,12 @@
import Tracking from '~/tracking';

function addBlobLinksTracking(containerSelector, eventsToTrack) {
  const containerEl = document.querySelector(containerSelector);
const eventsToTrack = [
  { selector: '.file-line-blame', property: 'blame' },
  { selector: '.file-line-num', property: 'link' },
];

function addBlobLinksTracking() {
  const containerEl = document.querySelector('.file-holder');

  if (!containerEl) {
    return;
@@ -22,12 +22,16 @@ import AccessorUtils from '~/lib/utils/accessor';
import { __ } from '~/locale';
import Tracking from '~/tracking';
import TimeAgo from '~/vue_shared/components/time_ago_tooltip.vue';
import { sanitizeUrl } from '~/lib/utils/url_utility';
import { trackErrorListViewsOptions, trackErrorStatusUpdateOptions } from '../utils';
import { I18N_ERROR_TRACKING_LIST } from '../constants';
import ErrorTrackingActions from './error_tracking_actions.vue';

export const tableDataClass = 'table-col d-flex d-md-table-cell align-items-center';

const isValidErrorId = (errorId) => {
  return /^[0-9]+$/.test(errorId);
};
export default {
  FIRST_PAGE: 1,
  PREV_PAGE: 1,

@@ -202,6 +206,9 @@ export default {
      this.searchByQuery(text);
    },
    getDetailsLink(errorId) {
      if (!isValidErrorId(errorId)) {
        return 'about:blank';
      }
      return `error_tracking/${errorId}/details`;
    },
    goToNextPage() {

@@ -222,7 +229,10 @@ export default {
      return filter === this.statusFilter;
    },
    getIssueUpdatePath(errorId) {
      return `/${this.projectPath}/-/error_tracking/${errorId}.json`;
      if (!isValidErrorId(errorId)) {
        return 'about:blank';
      }
      return sanitizeUrl(`/${this.projectPath}/-/error_tracking/${errorId}.json`);
    },
    filterErrors(status, label) {
      this.filterValue = label;
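The numeric-ID guard these hunks introduce is easy to demonstrate in isolation. A standalone sketch of the same check (re-implemented here for illustration, not imported from the component):

```javascript
// Sketch of the guard added in the error-tracking diff: only purely numeric
// error IDs may be interpolated into URLs; anything else falls back to a
// harmless about:blank instead of reaching the router.
const isValidErrorId = (errorId) => /^[0-9]+$/.test(errorId);

function getDetailsLink(errorId) {
  if (!isValidErrorId(errorId)) {
    return 'about:blank';
  }
  return `error_tracking/${errorId}/details`;
}
```

The component applies the same guard in `getIssueUpdatePath` and additionally passes the result through `sanitizeUrl`, so a crafted `errorId` can neither escape the path nor smuggle in a `javascript:` scheme.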
@@ -4,7 +4,6 @@ import BlobForkSuggestion from '~/blob/blob_fork_suggestion';
import BlobLinePermalinkUpdater from '~/blob/blob_line_permalink_updater';
import LineHighlighter from '~/blob/line_highlighter';
import initBlobBundle from '~/blob_edit/blob_bundle';
import addBlobLinksTracking from '~/blob/blob_links_tracking';

export default () => {
  new LineHighlighter(); // eslint-disable-line no-new

@@ -16,12 +15,6 @@ export default () => {
    document.querySelectorAll('.js-data-file-blob-permalink-url, .js-blob-blame-link'),
  );

  const eventsToTrack = [
    { selector: '.file-line-blame', property: 'blame' },
    { selector: '.file-line-num', property: 'link' },
  ];
  addBlobLinksTracking('#blob-content-holder', eventsToTrack);

  const fileBlobPermalinkUrlElement = document.querySelector('.js-data-file-blob-permalink-url');
  const fileBlobPermalinkUrl =
    fileBlobPermalinkUrlElement && fileBlobPermalinkUrlElement.getAttribute('href');
@@ -13,6 +13,7 @@ import glFeatureFlagMixin from '~/vue_shared/mixins/gl_feature_flags_mixin';
import WebIdeLink from '~/vue_shared/components/web_ide_link.vue';
import CodeIntelligence from '~/code_navigation/components/app.vue';
import LineHighlighter from '~/blob/line_highlighter';
import addBlameLink from '~/blob/blob_blame_link';
import getRefMixin from '../mixins/get_ref';
import blobInfoQuery from '../queries/blob_info.query.graphql';
import userInfoQuery from '../queries/user_info.query.graphql';

@@ -242,6 +243,7 @@ export default {

      if (type === SIMPLE_BLOB_VIEWER) {
        new LineHighlighter(); // eslint-disable-line no-new
        addBlameLink('.file-holder', 'js-line-links');
      }
    });
@@ -1,5 +1,6 @@
<script>
import { GlBadge, GlLink, GlSafeHtmlDirective, GlModalDirective } from '@gitlab/ui';
import { isArray } from 'lodash';
import Actions from '../action_buttons.vue';
import StatusIcon from './status_icon.vue';
import { generateText } from './utils';

@@ -35,6 +36,20 @@ export default {
      required: true,
    },
  },
  computed: {
    subtext() {
      const { subtext } = this.data;
      if (subtext) {
        if (isArray(subtext)) {
          return subtext.map((t) => generateText(t)).join('<br />');
        }

        return generateText(subtext);
      }

      return null;
    },
  },
  methods: {
    isArray(arr) {
      return Array.isArray(arr);

@@ -91,11 +106,7 @@ export default {
          @clickedAction="onClickedAction"
        />
      </div>
      <p
        v-if="data.subtext"
        v-safe-html="generateText(data.subtext)"
        class="gl-m-0 gl-font-sm"
      ></p>
      <p v-if="subtext" v-safe-html="subtext" class="gl-m-0 gl-font-sm"></p>
    </div>
  </div>
  <template v-if="data.children && level === 2">
@@ -35,6 +35,9 @@ const textStyleTags = {
  [getStartTag('small')]: '<span class="gl-font-sm gl-text-gray-700">',
};

const escapeText = (text) =>
  document.createElement('div').appendChild(document.createTextNode(text)).parentNode.innerHTML;

const createText = (text) => {
  return text
    .replace(

@@ -61,7 +64,7 @@ const createText = (text) => {

export const generateText = (text) => {
  if (typeof text === 'string') {
    return createText(text);
    return createText(escapeText(text));
  } else if (
    typeof text === 'object' &&
    typeof text.text === 'string' &&

@@ -69,8 +72,8 @@ export const generateText = (text) => {
  ) {
    return createText(
      `${
        text.prependText ? `${text.prependText} ` : ''
      }<a class="gl-text-decoration-underline" href="${text.href}">${text.text}</a>`,
        text.prependText ? `${escapeText(text.prependText)} ` : ''
      }<a class="gl-text-decoration-underline" href="${text.href}">${escapeText(text.text)}</a>`,
    );
  }
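The `escapeText` helper in this hunk relies on a DOM text node. The substitutions it performs can be sketched without a DOM (an approximation for illustration; a real text node round-trip escapes exactly `&`, `<` and `>`):

```javascript
// DOM-free sketch of the escapeText idea from the XSS fix above: neutralize
// markup characters before the string is handed to v-safe-html. Order
// matters: '&' must be replaced first so existing entities are not mangled.
const escapeText = (text) =>
  String(text).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

const escaped = escapeText('<img src=x onerror=alert(1)>');
```

With user-controlled `text.text` and `prependText` escaped this way, the only markup that survives into `generateText`'s output is the markup the function itself constructs.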
@@ -19,25 +19,23 @@ export default {
      if (errorSummary.errored >= 1 && errorSummary.resolved >= 1) {
        const improvements = sprintf(
          n__(
            '%{strongOpen}%{errors}%{strongClose} point',
            '%{strongOpen}%{errors}%{strongClose} points',
            '%{strong_start}%{errors}%{strong_end} point',
            '%{strong_start}%{errors}%{strong_end} points',
            resolvedErrors.length,
          ),
          {
            errors: resolvedErrors.length,
            strongOpen: '<strong>',
            strongClose: '</strong>',
          },
          false,
        );

        const degradations = sprintf(
          n__(
            '%{strongOpen}%{errors}%{strongClose} point',
            '%{strongOpen}%{errors}%{strongClose} points',
            '%{strong_start}%{errors}%{strong_end} point',
            '%{strong_start}%{errors}%{strong_end} points',
            newErrors.length,
          ),
          { errors: newErrors.length, strongOpen: '<strong>', strongClose: '</strong>' },
          { errors: newErrors.length },
          false,
        );
        return sprintf(
@@ -96,14 +94,11 @@ export default {
      this.collapsedData.resolvedErrors.map((e) => {
        return fullData.push({
          text: `${capitalizeFirstCharacter(e.severity)} - ${e.description}`,
          subtext: sprintf(
            s__(`ciReport|in %{open_link}${e.file_path}:${e.line}%{close_link}`),
            {
              open_link: `<a class="gl-text-decoration-underline" href="${e.urlPath}">`,
              close_link: '</a>',
            },
            false,
          ),
          subtext: {
            prependText: s__(`ciReport|in`),
            text: `${e.file_path}:${e.line}`,
            href: e.urlPath,
          },
          icon: {
            name: SEVERITY_ICONS_EXTENSION[e.severity],
          },
@@ -63,13 +63,16 @@ export default {
      if (valid.length) {
        title = validText;
        if (invalid.length) {
          subtitle = sprintf(`<br>%{small_start}${invalidText}%{small_end}`);
          subtitle = invalidText;
        }
      } else {
        title = invalidText;
      }

      return `${title}${subtitle}`;
      return {
        subject: title,
        meta: subtitle,
      };
    },
    fetchCollapsedData() {
      return axios

@@ -152,9 +155,8 @@ export default {
      }

      return {
        text: `${title}
        <br>
        ${subtitle}`,
        text: title,
        supportingText: subtitle,
        icon: { name: iconName },
        actions,
      };
@@ -60,7 +60,7 @@ export const reportSubTextBuilder = ({ suite_errors: suiteErrors, summary }) =>
    if (suiteErrors?.base) {
      errors.push(`${i18n.baseReportParsingError} ${suiteErrors.base}`);
    }
    return errors.join('<br />');
    return errors;
  }
  return recentFailuresTextBuilder(summary);
};
@@ -3,6 +3,7 @@ import { GlSafeHtmlDirective, GlLoadingIcon } from '@gitlab/ui';
import LineHighlighter from '~/blob/line_highlighter';
import eventHub from '~/notes/event_hub';
import languageLoader from '~/content_editor/services/highlight_js_language_loader';
import addBlobLinksTracking from '~/blob/blob_links_tracking';
import Tracking from '~/tracking';
import {
  EVENT_ACTION,

@@ -66,6 +67,7 @@ export default {
    },
  },
  async created() {
    addBlobLinksTracking();
    this.trackEvent(EVENT_LABEL_VIEWER);

    if (this.unsupportedLanguage) {
@@ -95,23 +95,14 @@ td.line-numbers {

.blob-viewer {
  .line-numbers {
    min-width: 6rem;
    // for server-side-rendering
    .line-links {
      @include gl-display-flex;

      &:first-child {
        margin-top: 10px;
      }

      &:last-child {
        margin-bottom: 10px;
      }
    }

    // for client
    &.line-links {
      min-width: 6rem;
      border-bottom-left-radius: 0;

      + pre {

@@ -120,15 +111,15 @@ td.line-numbers {
    }
  }

  .line-links {
    &:hover a::before,
    &:focus-within a::before {
      @include gl-visibility-visible;
    }
  .line-numbers:not(.line-links) a:hover::before,
  .line-numbers:not(.line-links) a:focus-within::before,
  .line-links:hover a::before,
  .line-links:focus-within a::before {
    @include gl-visibility-visible;
  }

  .file-line-num {
    min-width: 4.5rem;
    @include gl-justify-content-end;
    @include gl-flex-grow-1;
    @include gl-pr-3;
@@ -88,6 +88,12 @@ module EventsHelper
    end
  end

  def event_target_path(event)
    return Gitlab::UrlBuilder.build(event.target, only_path: true) if event.work_item?

    event.target_link_options
  end

  def event_feed_title(event)
    words = []
    words << event.author_name
@@ -14,6 +14,11 @@ module Integrations
      raise NotImplementedError
    end

    # Return the url variables to be used for the webhook.
    def url_variables
      raise NotImplementedError
    end

    # Return whether the webhook should use SSL verification.
    def hook_ssl_verification
      if respond_to?(:enable_ssl_verification)

@@ -26,7 +31,11 @@ module Integrations
    # Create or update the webhook, raising an exception if it cannot be saved.
    def update_web_hook!
      hook = service_hook || build_service_hook
      hook.url = hook_url if hook.url != hook_url # avoid reencryption

      # Avoid reencryption
      hook.url = hook_url if hook.url != hook_url
      hook.url_variables = url_variables if hook.url_variables != url_variables

      hook.enable_ssl_verification = hook_ssl_verification
      hook.save! if hook.changed?
      hook
@@ -3,13 +3,16 @@
module SafeUrl
  extend ActiveSupport::Concern

  # Return the URL with obfuscated userinfo
  # and keeping it intact
  def safe_url(allowed_usernames: [])
    return if url.nil?

    uri = URI.parse(url)
    escaped = Addressable::URI.escape(url)
    uri = URI.parse(escaped)
    uri.password = '*****' if uri.password
    uri.user = '*****' if uri.user && allowed_usernames.exclude?(uri.user)
    uri.to_s
  rescue URI::Error
    Addressable::URI.unescape(uri.to_s)
  rescue URI::Error, TypeError
  end
end
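The hardened `safe_url` above is Ruby; the same userinfo masking can be sketched with the WHATWG URL API (an approximation: the real concern also round-trips the input through Addressable to cope with unescaped URLs):

```javascript
// Sketch of what SafeUrl#safe_url does, in JavaScript: mask credentials in a
// URL's userinfo so they never reach logs or webhook records. safeUrl here is
// an illustrative helper, not part of GitLab's frontend.
function safeUrl(rawUrl, allowedUsernames = []) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return undefined; // mirrors the rescue branch returning nil
  }
  if (url.password) url.password = '*****';
  if (url.username && !allowedUsernames.includes(url.username)) {
    url.username = '*****';
  }
  return url.toString();
}

const masked = safeUrl('https://deploy:hunter2@gitlab.example.com/repo.git');
```

The `allowed_usernames` escape hatch exists because some integrations legitimately embed a non-secret username in the URL.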
@@ -22,7 +22,7 @@ class WebHookLog < ApplicationRecord
  validates :web_hook, presence: true

  before_save :obfuscate_basic_auth
  before_save :redact_author_email
  before_save :redact_user_emails

  def self.recent
    where('created_at >= ?', 2.days.ago.beginning_of_day)

@@ -54,9 +54,9 @@ class WebHookLog < ApplicationRecord
    self.url = safe_url
  end

  def redact_author_email
    return unless self.request_data.dig('commit', 'author', 'email').present?

    self.request_data['commit']['author']['email'] = _('[REDACTED]')
  def redact_user_emails
    self.request_data.deep_transform_values! do |value|
      value =~ URI::MailTo::EMAIL_REGEXP ? _('[REDACTED]') : value
    end
  end
end
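The switch from `redact_author_email` to `redact_user_emails` walks every value in `request_data` rather than one known path. A JavaScript sketch of that deep transform (`EMAIL_RE` is a deliberately simplified stand-in for Ruby's `URI::MailTo::EMAIL_REGEXP`):

```javascript
// Sketch of the deep email redaction above: recursively visit every value in
// the payload and replace anything that looks like an email address, so
// emails nested anywhere in the webhook body are caught, not just
// commit.author.email.
const EMAIL_RE = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;

function redactEmails(value) {
  if (typeof value === 'string') {
    return EMAIL_RE.test(value) ? '[REDACTED]' : value;
  }
  if (Array.isArray(value)) {
    return value.map(redactEmails);
  }
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, redactEmails(v)]),
    );
  }
  return value;
}

const redacted = redactEmails({
  commit: { author: { email: 'dev@example.com', name: 'Dev' } },
  reviewers: ['qa@example.com', 'not-an-email'],
});
```

This is why the fix closes the leak generically: a new webhook payload field containing an email needs no corresponding new redaction rule.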
@@ -50,7 +50,11 @@ module Integrations

    override :hook_url
    def hook_url
      "#{buildkite_endpoint('webhook')}/deliver/#{webhook_token}"
      "#{buildkite_endpoint('webhook')}/deliver/{webhook_token}"
    end

    def url_variables
      { 'webhook_token' => webhook_token }
    end

    def execute(data)
@@ -170,13 +170,17 @@ module Integrations
      url = api_url.presence || sprintf(URL_TEMPLATE, datadog_domain: datadog_domain)
      url = URI.parse(url)
      query = {
        "dd-api-key" => api_key,
        "dd-api-key" => 'THIS_VALUE_WILL_BE_REPLACED',
        service: datadog_service.presence,
        env: datadog_env.presence,
        tags: datadog_tags_query_param.presence
      }.compact
      url.query = query.to_query
      url.to_s
      url.to_s.gsub('THIS_VALUE_WILL_BE_REPLACED', '{api_key}')
    end

    def url_variables
      { 'api_key' => api_key }
    end

    def execute(data)
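This hunk and the Buildkite one above share a pattern: secrets move out of the stored hook URL into `url_variables`, leaving `{api_key}`-style placeholders behind. A sketch of the delivery-time interpolation this implies (`interpolateHookUrl` is an illustrative helper, not GitLab code, and the exact escaping GitLab applies is an assumption here):

```javascript
// Sketch of hook-URL templating: the persisted URL contains only {name}
// placeholders, and the secret values live separately in url_variables.
// Substitution happens when the hook fires, so redacting stored URLs (as the
// WebHookLog changes do) no longer risks leaking tokens.
function interpolateHookUrl(templateUrl, urlVariables) {
  return templateUrl.replace(/\{(\w+)\}/g, (match, name) =>
    name in urlVariables ? encodeURIComponent(urlVariables[name]) : match,
  );
}

const stored = 'https://webhook.buildkite.com/deliver/{webhook_token}';
const delivered = interpolateHookUrl(stored, { webhook_token: 's3cret' });
```

Unknown placeholders are left untouched, which keeps a typo in the template visible instead of silently producing an empty path segment.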
@@ -106,7 +106,11 @@ module Integrations

    override :hook_url
    def hook_url
      [drone_url, "/hook", "?owner=#{project.namespace.full_path}", "&name=#{project.path}", "&access_token=#{token}"].join
      [drone_url, "/hook", "?owner=#{project.namespace.full_path}", "&name=#{project.path}", "&access_token={token}"].join
    end

    def url_variables
      { 'token' => token }
    end

    override :update_web_hook!
@@ -69,6 +69,10 @@ module Integrations
      url.to_s
    end

    def url_variables
      {}
    end

    def self.supported_events
      %w(push merge_request tag_push)
    end
@@ -66,7 +66,11 @@ module Integrations
    override :hook_url
    def hook_url
      base_url = server.presence || 'https://packagist.org'
      "#{base_url}/api/update-package?username=#{username}&apiToken=#{token}"
      "#{base_url}/api/update-package?username={username}&apiToken={token}"
    end

    def url_variables
      { 'username' => username, 'token' => token }
    end
  end
end
@@ -2159,6 +2159,10 @@ class User < ApplicationRecord
    (Date.current - created_at.to_date).to_i
  end

  def webhook_email
    public_email.presence || _('[REDACTED]')
  end

  protected

  # override, from Devise::Validatable
@@ -21,6 +21,12 @@ class IssuablePolicy < BasePolicy
    enable :reopen_issue
  end

  # This rule replicates permissions in NotePolicy#can_read_confidential and it's used in
  # TodoPolicy for performance reasons
  rule { can?(:reporter_access) | assignee_or_author | admin }.policy do
    enable :read_confidential_notes
  end

  rule { can?(:read_merge_request) & assignee_or_author }.policy do
    enable :update_merge_request
    enable :reopen_merge_request
@@ -20,6 +20,7 @@ class NotePolicy < BasePolicy

  condition(:confidential, scope: :subject) { @subject.confidential? }

  # If this condition changes IssuablePolicy#read_confidential_notes should be updated too
  condition(:can_read_confidential) do
    access_level >= Gitlab::Access::REPORTER || @subject.noteable_assignee_or_author?(@user) || admin?
  end
@@ -5,10 +5,25 @@ class TodoPolicy < BasePolicy
  condition(:own_todo) do
    @user && @subject.user_id == @user.id
  end

  desc "User can read the todo's target"
  condition(:can_read_target) do
    @user && @subject.target&.readable_by?(@user)
  end

  desc "Todo has confidential note"
  condition(:has_confidential_note, scope: :subject) { @subject&.note&.confidential? }

  desc "User can read the todo's confidential note"
  condition(:can_read_todo_confidential_note) do
    @user && @user.can?(:read_confidential_notes, @subject.target)
  end

  rule { own_todo & can_read_target }.enable :read_todo
  rule { own_todo & can_read_target }.enable :update_todo
  rule { can?(:read_todo) }.enable :update_todo

  rule { has_confidential_note & ~can_read_todo_confidential_note }.policy do
    prevent :read_todo
    prevent :update_todo
  end
end
@@ -7,7 +7,9 @@ class BaseProjectService < ::BaseContainerService
  attr_accessor :project

  def initialize(project:, current_user: nil, params: {})
    super(container: project, current_user: current_user, params: params)
    # we need to exclude project params since they may come from external requests. project should always
    # be passed as part of the service's initializer
    super(container: project, current_user: current_user, params: params.except(:project, :project_id))

    @project = project
  end
@@ -14,7 +14,12 @@ class FileUploader < GitlabUploader
  include ObjectStorage::Concern
  prepend ObjectStorage::Extension::RecordsUploads

  MARKDOWN_PATTERN = %r{\!?\[.*?\]\(/uploads/(?<secret>[0-9a-f]{32})/(?<file>.*?)\)}.freeze
  # This pattern is vulnerable to malicious inputs, so use Gitlab::UntrustedRegexp
  # to place bounds on execution time
  MARKDOWN_PATTERN = Gitlab::UntrustedRegexp.new(
    '!?\[.*?\]\(/uploads/(?P<secret>[0-9a-f]{32})/(?P<file>.*?)\)'
  )

  DYNAMIC_PATH_PATTERN = %r{.*(?<secret>\b(\h{10}|\h{32}))\/(?<identifier>.*)}.freeze
  VALID_SECRET_PATTERN = %r{\A\h{10,32}\z}.freeze
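The ported pattern keeps the same named groups. A sketch matching it with a plain JavaScript regex (note that this engine still backtracks; the point of the upstream change is that `Gitlab::UntrustedRegexp`, backed by RE2, runs in linear time regardless of input):

```javascript
// Sketch of the upload-rewriter pattern from the diff above: match markdown
// image/file references under /uploads/ and extract the 32-hex secret and
// the file name via named capture groups.
const MARKDOWN_PATTERN = /!?\[.*?\]\(\/uploads\/(?<secret>[0-9a-f]{32})\/(?<file>.*?)\)/;

const match = '![shot](/uploads/0123456789abcdef0123456789abcdef/shot.png)'.match(
  MARKDOWN_PATTERN,
);
```

RE2 uses `(?P<name>...)` syntax for the same groups, which is why the Ruby pattern changes from `(?<secret>...)` to `(?P<secret>...)` in the diff.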
@@ -8,7 +8,7 @@
%span.event-type.d-inline-block.gl-mr-2{ class: event.action_name }
  = event.action_name
%span.event-target-type.gl-mr-2= event.target_type_name
= link_to event.target_link_options, class: 'has-tooltip event-target-link gl-mr-2', title: event.target_title do
= link_to event_target_path(event), class: 'has-tooltip event-target-link gl-mr-2', title: event.target_title do
  = event.target.reference_link_text
- unless event.milestone?
  %span.event-target-title.gl-text-overflow-ellipsis.gl-overflow-hidden.gl-mr-2{ dir: "auto" }
@@ -1,17 +1,14 @@
 #blob-content.file-content.code.js-syntax-highlight
   - offset = defined?(first_line_number) ? first_line_number : 1
-  .line-numbers{ class: "gl-p-0\!" }
+  - blame_path = project_blame_path(@project, tree_join(@ref, blob.path))
+  .line-numbers{ class: "gl-px-0!", data: { blame_path: blame_path } }
     - if blob.data.present?
       - link = blob_link if defined?(blob_link)
-      - blame_link = project_blame_path(@project, tree_join(@ref, blob.path))
       - blob.data.each_line.each_with_index do |_, index|
         - i = index + offset
         -# We're not using `link_to` because it is too slow once we get to thousands of lines.
-        .line-links.diff-line-num
-          - if Feature.enabled?(:file_line_blame)
-            %a.file-line-blame{ href: "#{blame_link}#L#{i}" }
-          %a.file-line-num{ href: "#{link}#L#{i}", id: "L#{i}", 'data-line-number' => i }
-            = i
+        %a.file-line-num.diff-line-num{ class: ("js-line-links" if Feature.enabled?(:file_line_blame)), href: "#{link}#L#{i}", id: "L#{i}", 'data-line-number' => i }
+          = i
   - highlight = defined?(highlight_line) && highlight_line ? highlight_line - offset : nil
   .blob-content{ data: { blob_id: blob.id, path: blob.path, highlight_line: highlight, qa_selector: 'file_content' } }
     %pre.code.highlight
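The template above numbers blob lines by enumerating them with an `index + offset` computation and building the `#L<n>` anchors through string interpolation, because `link_to` is too slow at thousands of lines. A minimal Ruby sketch of that numbering loop (hypothetical helper name, not GitLab's actual view code):

```ruby
# Hypothetical sketch of the line-numbering loop: enumerate each line of the
# blob, offset the zero-based index by first_line_number, and build the anchor
# markup as a plain string instead of calling link_to for every line.
def line_anchors(data, link, offset = 1)
  data.each_line.each_with_index.map do |_, index|
    i = index + offset
    %(<a href="#{link}#L#{i}" id="L#{i}" data-line-number="#{i}">#{i}</a>)
  end
end

line_anchors("a\nb\n", '/blob').first
# => "<a href=\"/blob#L1\" id=\"L1\" data-line-number=\"1\">1</a>"
```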
70 data/whats_new/2022082200001_15_03.yml (Normal file)
@@ -0,0 +1,70 @@
- name: "Create tasks in issues"
  description: |
    Tasks provide a robust way to refine an issue into smaller, discrete work units. Previously in GitLab, you could break down an issue into smaller parts using markdown checklists within the description. However, these checklist items could not be easily assigned, labeled, or managed anywhere outside of the description field.

    You can now create tasks within issues from the Child Items widget. Then, you can open the task directly within the issue to quickly update the title, set the weight, or add a description. Tasks break down work within projects for GitLab Free and increase the planning hierarchy for our GitLab Premium customers to three levels (epic, issue, and task). In our next iteration, you will be able to add labels, milestones, and iterations to each task.

    Tasks represent our first step toward evolving issues, epics, incidents, requirements, and test cases to [work items](https://docs.gitlab.com/ee/development/work_items.html). If you have feedback or suggestions about tasks, please comment on [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/363613).
  stage: plan
  self-managed: true
  gitlab-com: true
  available_in: [Free, Premium, Ultimate]
  documentation_link: https://docs.gitlab.com/ee/user/tasks.html
  image_url: https://about.gitlab.com/images/15_3/create-tasks.gif
  published_at: 2022-08-22
  release: 15.3
- name: "GitOps features are now free"
  description: |
    When you use GitOps to update a Kubernetes cluster, also called a pull-based deployment, you get an improved security model, better scalability and stability.

    The GitLab agent for Kubernetes has supported [GitOps workflows](https://docs.gitlab.com/ee/user/clusters/agent/gitops.html) from its initial release, but until now, the functionality was available only if you had a GitLab Premium or Ultimate subscription. Now if you have a Free subscription, you also get pull-based deployment support. The features available in GitLab Free should serve small, high-trust teams or be suitable to test the agent before upgrading to a higher tier.

    In the future, we plan to add [built-in multi-tenant support](https://gitlab.com/gitlab-org/gitlab/-/issues/337904) for Premium subscriptions. This feature would be similar to the impersonation feature already available for the [CI/CD workflow](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#restrict-project-and-group-access-by-using-impersonation).
  stage: configure
  self-managed: true
  gitlab-com: true
  available_in: [Free, Premium, Ultimate]
  documentation_link: https://docs.gitlab.com/ee/user/clusters/agent/gitops.html
  image_url: https://img.youtube.com/vi/jgVxOnMfOZA/hqdefault.jpg
  published_at: 2022-08-22
  release: 15.3
- name: "Submit merge request review with summary comment"
  description: |
    When you finish reviewing a merge request, there are probably some common things that you do, like summarizing your review for others or approving the changes if they look good to you. Those common tasks are now quicker and easier: when you submit your review, you can add a summary comment along with any [quick actions](https://docs.gitlab.com/ee/user/project/quick_actions.html) like `/approve`.
  stage: create
  self-managed: true
  gitlab-com: true
  available_in: [Free, Premium, Ultimate]
  documentation_link: https://docs.gitlab.com/ee/user/project/merge_requests/reviews/#submit-a-review
  image_url: https://about.gitlab.com/images/15_3/create-mr-review-summary.png
  published_at: 2022-08-22
  release: 15.3
- name: "Define password complexity requirements"
  description: |
    GitLab administrators can now define password complexity requirements in addition to minimum password length. For new passwords, you can now require:

    - Numbers.
    - Uppercase letters.
    - Lowercase letters.
    - Symbols.

    Complex passwords are less likely to be compromised, and the ability to configure password complexity requirements helps administrators enforce their password policies.
  stage: manage
  self-managed: true
  gitlab-com: false
  available_in: [Premium, Ultimate]
  documentation_link: https://docs.gitlab.com/ee/user/admin_area/settings/sign_up_restrictions.html#password-complexity-requirements
  image_url: https://about.gitlab.com/images/15_3/manage-password-complexity-policy.png
  published_at: 2022-08-22
  release: 15.3
- name: "Maintain SAML Group Links with API"
  description: |
    Until now, SAML group links had to be configured in the UI. Now, you can manage SAML group links programmatically using the API so you can automate SAML groups management.
  stage: manage
  self-managed: true
  gitlab-com: true
  available_in: [Premium, Ultimate]
  documentation_link: https://docs.gitlab.com/ee/api/groups.html#saml-group-links
  image_url: https://img.youtube.com/vi/Pft61UFM5LM/hqdefault.jpg
  published_at: 2022-08-22
  release: 15.3
@@ -1444,7 +1444,7 @@ response attributes:
 | Attribute         | Type    | Description |
 |:------------------|:--------|:-------------------------------------------------------------------------------------|
 | `[].name`         | string  | Name of the SAML group |
-| `[].access_level` | string  | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |
+| `[].access_level` | integer | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |

 Example request:

@@ -1458,11 +1458,11 @@ Example response:
 [
   {
     "name": "saml-group-1",
-    "access_level": "Guest"
+    "access_level": 10
   },
   {
     "name": "saml-group-2",
-    "access_level": "Maintainer"
+    "access_level": 40
   }
 ]
 ```

@@ -1488,7 +1488,7 @@ response attributes:
 | Attribute      | Type    | Description |
 |:---------------|:--------|:-------------------------------------------------------------------------------------|
 | `name`         | string  | Name of the SAML group |
-| `access_level` | string  | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |
+| `access_level` | integer | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |

 Example request:

@@ -1501,7 +1501,7 @@ Example response:
 ```json
 {
   "name": "saml-group-1",
-  "access_level": "Guest"
+  "access_level": 10
 }
 ```

@@ -1519,7 +1519,7 @@ Supported attributes:
 |:------------------|:---------------|:---------|:-------------------------------------------------------------------------------------|
 | `id`              | integer/string | yes      | ID or [URL-encoded path of the group](index.md#namespaced-path-encoding) |
 | `saml_group_name` | string         | yes      | Name of a SAML group |
-| `access_level`    | string         | yes      | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |
+| `access_level`    | integer        | yes      | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |

 If successful, returns [`201`](index.md#status-codes) and the following
 response attributes:

@@ -1527,7 +1527,7 @@ response attributes:
 | Attribute      | Type    | Description |
 |:---------------|:--------|:-------------------------------------------------------------------------------------|
 | `name`         | string  | Name of the SAML group |
-| `access_level` | string  | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |
+| `access_level` | integer | Minimum [access level](members.md#valid-access-levels) for members of the SAML group |

 Example request:

@@ -1540,7 +1540,7 @@ Example response:
 ```json
 {
   "name": "saml-group-1",
-  "access_level": "Guest"
+  "access_level": 10
 }
 ```
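The hunks above swap role names for their numeric values in the SAML group links API responses. The correspondence follows GitLab's documented valid access levels; a small sketch of that mapping for translating between the two response forms:

```ruby
# Numeric access levels as documented for the GitLab members API
# ("valid access levels"): role name => integer value.
ACCESS_LEVELS = {
  'Guest'      => 10,
  'Reporter'   => 20,
  'Developer'  => 30,
  'Maintainer' => 40,
  'Owner'      => 50
}.freeze

# Translating the old string-style response values to the new integer form:
ACCESS_LEVELS['Guest']       # => 10
ACCESS_LEVELS['Maintainer']  # => 40
```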
@@ -274,6 +274,7 @@ listed in the descriptions of the relevant settings.
 | `container_registry_token_expire_delay` | integer | no | Container Registry token duration in minutes. |
+| `package_registry_cleanup_policies_worker_capacity` | integer | no | Number of workers assigned to the packages cleanup policies. |
 | `deactivate_dormant_users` | boolean | no | Enable [automatic deactivation of dormant users](../user/admin_area/moderate_users.md#automatically-deactivate-dormant-users). |
 | `deactivate_dormant_users_period` | integer | no | Length of time (in days) after which a user is considered dormant. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/336747) in GitLab 15.3. |
 | `default_artifacts_expire_in` | string | no | Set the default expiration time for each job's artifacts. |
 | `default_branch_name` | string | no | [Instance-level custom initial branch name](../user/project/repository/branches/default.md#instance-level-custom-initial-branch-name) ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/225258) in GitLab 13.2). |
 | `default_branch_protection` | integer | no | Determine if developers can push to the default branch. Can take: `0` _(not protected, both users with the Developer or Maintainer role can push new commits and force push)_, `1` _(partially protected, users with the Developer or Maintainer role can push new commits, but cannot force push)_ or `2` _(fully protected, users with the Developer role cannot push new commits, but users with the Maintainer role can; no one can force push)_ as a parameter. Default is `2`. |
@@ -200,6 +200,10 @@ The following table lists project permissions available for each role:
 | [Security dashboard](application_security/security_dashboard/index.md):<br>Use security dashboard | | | ✓ | ✓ | ✓ |
 | [Security dashboard](application_security/security_dashboard/index.md):<br>View vulnerability | | | ✓ | ✓ | ✓ |
 | [Security dashboard](application_security/security_dashboard/index.md):<br>View vulnerability findings in [dependency list](application_security/dependency_list/index.md) | | | ✓ | ✓ | ✓ |
+| [Tasks](tasks.md):<br>Create (*18*) | ✓ | ✓ | ✓ | ✓ | ✓ |
+| [Tasks](tasks.md):<br>Edit | | ✓ | ✓ | ✓ | ✓ |
+| [Tasks](tasks.md):<br>Remove from issue | | ✓ | ✓ | ✓ | ✓ |
+| [Tasks](tasks.md):<br>Delete (*22*) | | | | | ✓ |
 | [Terraform](infrastructure/index.md):<br>Read Terraform state | | | ✓ | ✓ | ✓ |
 | [Terraform](infrastructure/index.md):<br>Manage Terraform state | | | | ✓ | ✓ |
 | [Test cases](../ci/test_cases/index.md):<br>Archive | | ✓ | ✓ | ✓ | ✓ |

@@ -235,10 +239,11 @@ The following table lists project permissions available for each role:
 16. In GitLab 14.5 or later, Guests are not allowed to [create incidents](../operations/incident_management/incidents.md#incident-creation).
     In GitLab 15.1 and later, a Guest who created an issue that was promoted to an incident cannot edit, close, or reopen their incident.
 17. In projects that accept contributions from external members, users can create, edit, and close their own merge requests.
-18. Authors and assignees of issues can modify the title and description even if they don't have the Reporter role.
+18. Authors and assignees can modify the title and description even if they don't have the Reporter role.
 19. Authors and assignees can close and reopen issues even if they don't have the Reporter role.
 20. The ability to view the Container Registry and pull images is controlled by the [Container Registry's visibility permissions](packages/container_registry/index.md#container-registry-visibility-permissions).
 21. Maintainers cannot create, demote, or remove Owners, and they cannot promote users to the Owner role. They also cannot approve Owner role access requests.
+22. Authors of tasks can delete them even if they don't have the Owner role, but they have to have at least the Guest role for the project.

 <!-- markdownlint-enable MD029 -->
@@ -38,25 +38,64 @@ to work items and adding custom work item types, visit
 [epic 6033](https://gitlab.com/groups/gitlab-org/-/epics/6033) or
 [Plan direction page](https://about.gitlab.com/direction/plan/).

 ## View tasks

 View tasks in issues, in the **Child items** section.

+You can also [filter the list of issues](project/issues/managing_issues.md#filter-the-list-of-issues)
+for `Type = task`.
+
 ## Create a task

 Prerequisites:

 - You must have at least the Guest role for the project, or the project must be public.

 To create a task:

-1. In an issue description, create a [task list](markdown.md#task-lists).
-1. Hover over a task item and select **Create task** (**{doc-new}**).
+1. In an issue description, in the **Child items** section, select **Add a task**.
+1. Enter the task title.
+1. Select **Create task**.

 ## Edit a task

 Prerequisites:

 - You must have at least the Reporter role for the project.

 To edit a task:

-1. In the issue description, view the task links.
-1. Select a link. The task is displayed.
-   - To edit the description, select **Edit**, then select **Save**.
-   - To edit the title or state, make your changes, then select any area outside the field. The changes are saved automatically.
+1. In the issue description, in the **Child items** section, select the task you want to edit.
+   The task window opens.
+1. Optional. To edit the title, select it and make your changes.
+1. Optional. To edit the description, select the edit icon (**{pencil}**), make your changes, and
+   select **Save**.
+1. Select the close icon (**{close}**).

+## Remove a task from an issue
+
+Prerequisites:
+
+- You must have at least the Reporter role for the project.
+
+You can remove a task from an issue. The task is not deleted, but the two are no longer connected.
+It's not possible to connect them again.
+
+To remove a task from an issue:
+
+1. In the issue description, in the **Child items** section, next to the task you want to remove, select the options menu (**{ellipsis_v}**).
+1. Select **Remove task**.
+
 ## Delete a task

 Prerequisites:

 - You must either:
   - Be the author of the task and have at least the Guest role for the project.
   - Have the Owner role for the project.

 To delete a task:

-1. In the issue description, select the task.
-1. From the options menu (**{ellipsis_v}**), select **Delete task**.
+1. In the issue description, in the **Child items** section, select the task you want to delete.
+1. In the task window, in the options menu (**{ellipsis_v}**), select **Delete task**.
+1. Select **OK**.
11 elasticsearch-rails/.gitignore (vendored)
@@ -1,11 +0,0 @@
.DS_Store
*.log
tmp/
.idea/*

.yardoc/
_yardoc/
coverage/
rdoc/
doc/
Gemfile.lock
@@ -1,66 +0,0 @@
# -----------------------------------------------------------------------------
# Configuration file for http://travis-ci.org/elasticsearch/elasticsearch-rails
# -----------------------------------------------------------------------------

dist: trusty

sudo: required

language: ruby

services:
  - mongodb

branches:
  only:
    - master
    - travis
    - 5.x
    - 6.x
    - 2.x

matrix:
  include:
    - rvm: 2.2
      jdk: oraclejdk8
      env: RAILS_VERSIONS=3.0

    - rvm: 2.3.8
      jdk: oraclejdk8
      env: RAILS_VERSIONS=5.0

    - rvm: 2.6.1
      jdk: oraclejdk8
      env: RAILS_VERSIONS=4.0,5.0

    - rvm: jruby-9.2.5.0
      jdk: oraclejdk8
      env: RAILS_VERSIONS=5.0

env:
  global:
    - ELASTICSEARCH_VERSION=6.4.0
    - QUIET=true

before_install:
  - wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ELASTICSEARCH_VERSION}.deb
  - wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ELASTICSEARCH_VERSION}.deb.sha512
  - shasum -a 512 -c elasticsearch-${ELASTICSEARCH_VERSION}.deb.sha512
  - sudo dpkg -i --force-confnew elasticsearch-${ELASTICSEARCH_VERSION}.deb
  - sudo service elasticsearch start
  - gem update --system
  - gem update bundler
  - gem --version
  - bundle version

install:
  - bundle install
  - rake bundle:clean
  - rake bundle:install

script:
  - rake test:all

notifications:
  disable: true
@@ -1,21 +0,0 @@
*.gem
*.rbc
.bundle
.config
.yardoc
Gemfile.lock
InstalledFiles
_yardoc
coverage
doc/
lib/bundler/man
pkg
rdoc
spec/reports
test/tmp
test/version_tmp
tmp

gemfiles/3.0.gemfile.lock
gemfiles/4.0.gemfile.lock
gemfiles/5.0.gemfile.lock
@@ -1,74 +0,0 @@
## 0.1.9

* Added a `suggest` method to wrap the suggestions in response
* Added the `:includes` option to Adapter::ActiveRecord::Records for eagerly loading associated models
* Delegated `max_pages` method properly for Kaminari's `next_page`
* Fixed `#dup` behaviour for Elasticsearch::Model
* Fixed typos in the README and examples

## 0.1.8

* Added "default per page" methods for pagination with multi model searches
* Added a convenience accessor for the `aggregations` part of response
* Added a full example with mapping for the completion suggester
* Added an integration test for paginating multiple models
* Added proper support for the new "multi_fields" in the mapping DSL
* Added the `no_timeout` option for `__find_in_batches` in the Mongoid adapter
* Added, that index settings can be loaded from any object that responds to `:read`
* Added, that index settings/mappings can be loaded from a YAML or JSON file
* Added, that String pagination parameters are converted to numbers
* Added, that empty block is not required for setting mapping options
* Added, that on MyModel#import, an exception is raised if the index does not exist
* Changed the Elasticsearch port in the Mongoid example to 9200
* Cleaned up the tests for multiple fields/properties in mapping DSL
* Fixed a bug where continuous `#save` calls emptied the `@__changed_attributes` variable
* Fixed a buggy test introduced in #335
* Fixed incorrect deserialization of records in the Multiple adapter
* Fixed incorrect examples and documentation
* Fixed unreliable order of returned results/records in the integration test for the multiple adapter
* Fixed, that `param_name` is used when paginating with WillPaginate
* Fixed the problem where `document_type` configuration was not propagated to mapping
* Refactored the code in `__find_in_batches` to use Enumerable#each_slice
* Refactored the string queries in multiple_models_test.rb to avoid quote escaping

## 0.1.7

* Improved examples and instructions in README and code annotations
* Prevented index methods from swallowing all exceptions
* Added the `:validate` option to the `save` method for models
* Added support for searching across multiple models (elastic/elasticsearch-rails#345),
  including documentation, examples and tests

## 0.1.6

* Improved documentation
* Added dynamic getter/setter (block/proc) for `MyModel.index_name`
* Added the `update_document_attributes` method
* Added, that records to import can be limited by the `query` option

## 0.1.5

* Improved documentation
* Fixes and improvements to the "will_paginate" integration
* Added a `:preprocess` option to the `import` method
* Changed, that attributes are fetched from `as_indexed_json` in the `update_document` method
* Added an option to the import method to return an array of error messages instead of just count
* Fixed many problems with dependency hell
* Fixed tests so they run on Ruby 2.2

## 0.1.2

* Properly delegate existence methods like `result.foo?` to `result._source.foo`
* Exception is raised when `type` is not passed to Mappings#new
* Allow passing an ActiveRecord scope to the `import` method
* Added, that `each_with_hit` and `map_with_hit` in `Elasticsearch::Model::Response::Records` call `to_a`
* Added support for [`will_paginate`](https://github.com/mislav/will_paginate) pagination library
* Added the ability to transform models during indexing
* Added explicit `type` and `id` methods to Response::Result, aliasing `_type` and `_id`

## 0.1.1

* Improved documentation and tests
* Fixed Kaminari implementation bugs and inconsistencies

## 0.1.0 (Initial Version)
@@ -1,9 +0,0 @@
source 'https://rubygems.org'

# Specify your gem's dependencies in elasticsearch-model.gemspec
gemspec

group :development, :testing do
  gem 'rspec'
  gem 'pry-nav'
end
@@ -1,13 +0,0 @@
Copyright (c) 2014 Elasticsearch

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@@ -1,774 +0,0 @@
# Elasticsearch::Model

The `elasticsearch-model` library builds on top of the
[`elasticsearch`](https://github.com/elastic/elasticsearch-ruby) library.

It aims to simplify integration of Ruby classes ("models"), commonly found
e.g. in [Ruby on Rails](http://rubyonrails.org) applications, with the
[Elasticsearch](http://www.elasticsearch.org) search and analytics engine.

## Compatibility

This library is compatible with Ruby 1.9.3 and higher.

The library version numbers follow the Elasticsearch major versions, and the `master` branch
is compatible with the Elasticsearch `master` branch, therefore, with the next major version.

| Rubygem |   | Elasticsearch |
|:-------:|:-:|:-------------:|
| 0.1     | → | 1.x           |
| 2.x     | → | 2.x           |
| 5.x     | → | 5.x           |
| 6.x     | → | 6.x           |
| master  | → | master        |

## Installation

Install the package from [Rubygems](https://rubygems.org):

    gem install elasticsearch-model

To use an unreleased version, either add it to your `Gemfile` for [Bundler](http://bundler.io):

    gem 'elasticsearch-model', git: 'git://github.com/elastic/elasticsearch-rails.git', branch: '5.x'

or install it from a source code checkout:

    git clone https://github.com/elastic/elasticsearch-rails.git
    cd elasticsearch-rails/elasticsearch-model
    bundle install
    rake install
## Usage

Let's suppose you have an `Article` model:

```ruby
require 'active_record'
ActiveRecord::Base.establish_connection( adapter: 'sqlite3', database: ":memory:" )
ActiveRecord::Schema.define(version: 1) { create_table(:articles) { |t| t.string :title } }

class Article < ActiveRecord::Base; end

Article.create title: 'Quick brown fox'
Article.create title: 'Fast black dogs'
Article.create title: 'Swift green frogs'
```

### Setup

To add the Elasticsearch integration for this model, require `elasticsearch/model`
and include the main module in your class:

```ruby
require 'elasticsearch/model'

class Article < ActiveRecord::Base
  include Elasticsearch::Model
end
```

This will extend the model with functionality related to Elasticsearch.

#### Feature Extraction Pattern

Instead of including the `Elasticsearch::Model` module directly in your model,
you can include it in a "concern" or "trait" module, which is a common pattern in Rails applications,
using e.g. `ActiveSupport::Concern` as the instrumentation:

```ruby
# In: app/models/concerns/searchable.rb
#
module Searchable
  extend ActiveSupport::Concern

  included do
    include Elasticsearch::Model

    mapping do
      # ...
    end

    def self.search(query)
      # ...
    end
  end
end

# In: app/models/article.rb
#
class Article
  include Searchable
end
```
#### The `__elasticsearch__` Proxy

The `Elasticsearch::Model` module contains a large number of class and instance methods to provide
all its functionality. To prevent polluting your model namespace, this functionality is primarily
available via the `__elasticsearch__` class and instance level proxy methods;
see the `Elasticsearch::Model::Proxy` class documentation for technical information.

The module will include important methods, such as `search`, into the class or module only
when they haven't been defined already. The following two calls are thus functionally equivalent:

```ruby
Article.__elasticsearch__.search 'fox'
Article.search 'fox'
```

See the `Elasticsearch::Model` module documentation for technical information.

### The Elasticsearch client

The module will set up a [client](https://github.com/elastic/elasticsearch-ruby/tree/master/elasticsearch),
connected to `localhost:9200`, by default. You can access and use it as any other `Elasticsearch::Client`:

```ruby
Article.__elasticsearch__.client.cluster.health
# => { "cluster_name"=>"elasticsearch", "status"=>"yellow", ... }
```

To use a client with a different configuration, just set up a client for the model:

```ruby
Article.__elasticsearch__.client = Elasticsearch::Client.new host: 'api.server.org'
```

Or configure the client for all models:

```ruby
Elasticsearch::Model.client = Elasticsearch::Client.new log: true
```

You might want to do this during your application bootstrap process, e.g. in a Rails initializer.

Please refer to the
[`elasticsearch-transport`](https://github.com/elastic/elasticsearch-ruby/tree/master/elasticsearch-transport)
library documentation for all the configuration options, and to the
[`elasticsearch-api`](http://rubydoc.info/gems/elasticsearch-api) library documentation
for information about the Ruby client API.
### Importing the data

The first thing you'll want to do is to import your data into the index:

```ruby
Article.import
# => 0
```

It's possible to import only records from a specific `scope` or `query`, transform the batch with the `transform`
and `preprocess` options, or re-create the index by deleting it and creating it with the correct mapping with the `force` option -- look for examples in the method documentation.

No errors were reported during importing, so... let's search the index!

### Searching

For starters, we can try the "simple" type of search:

```ruby
response = Article.search 'fox dogs'

response.took
# => 3

response.results.total
# => 2

response.results.first._score
# => 0.02250402

response.results.first._source.title
# => "Quick brown fox"
```
#### Search results

The returned `response` object is a rich wrapper around the JSON returned from Elasticsearch,
providing access to response metadata and the actual results ("hits").

Each "hit" is wrapped in the `Result` class, and provides method access
to its properties via [`Hashie::Mash`](http://github.com/intridea/hashie).

The `results` object supports the `Enumerable` interface:

```ruby
response.results.map { |r| r._source.title }
# => ["Quick brown fox", "Fast black dogs"]

response.results.select { |r| r.title =~ /^Q/ }
# => [#<Elasticsearch::Model::Response::Result:0x007 ... "_source"=>{"title"=>"Quick brown fox"}}>]
```

In fact, the `response` object will delegate `Enumerable` methods to `results`:

```ruby
response.any? { |r| r.title =~ /fox|dog/ }
# => true
```

To use `Array`'s methods (including any _ActiveSupport_ extensions), just call `to_a` on the object:

```ruby
response.to_a.last.title
# => "Fast black dogs"
```
#### Search results as database records

Instead of returning documents from Elasticsearch, the `records` method will return a collection
of model instances, fetched from the primary database, ordered by score:

```ruby
response.records.to_a
# Article Load (0.3ms)  SELECT "articles".* FROM "articles" WHERE "articles"."id" IN (1, 2)
# => [#<Article id: 1, title: "Quick brown fox">, #<Article id: 2, title: "Fast black dogs">]
```

The returned object is the genuine collection of model instances returned by your database,
i.e. `ActiveRecord::Relation` for ActiveRecord, or `Mongoid::Criteria` in case of MongoDB.

This allows you to chain other methods on top of search results, as you would normally do:

```ruby
response.records.where(title: 'Quick brown fox').to_a
# Article Load (0.2ms)  SELECT "articles".* FROM "articles" WHERE "articles"."id" IN (1, 2) AND "articles"."title" = 'Quick brown fox'
# => [#<Article id: 1, title: "Quick brown fox">]

response.records.records.class
# => ActiveRecord::Relation::ActiveRecord_Relation_Article
```

The ordering of the records by score will be preserved, unless you explicitly specify a different
order in your model query language:

```ruby
response.records.order(:title).to_a
# Article Load (0.2ms)  SELECT "articles".* FROM "articles" WHERE "articles"."id" IN (1, 2) ORDER BY "articles".title ASC
# => [#<Article id: 2, title: "Fast black dogs">, #<Article id: 1, title: "Quick brown fox">]
```

The `records` method returns the real instances of your model, which is useful when you want to access your
model methods -- at the expense of slowing down your application, of course.
In most cases, working with `results` coming from Elasticsearch is sufficient, and much faster. See the
[`elasticsearch-rails`](https://github.com/elastic/elasticsearch-rails/tree/master/elasticsearch-rails)
library for more information about compatibility with the Ruby on Rails framework.

When you want to access both the database `records` and search `results`, use the `each_with_hit`
(or `map_with_hit`) iterator:

```ruby
response.records.each_with_hit { |record, hit| puts "* #{record.title}: #{hit._score}" }
# * Quick brown fox: 0.02250402
# * Fast black dogs: 0.02250402
```

#### Searching multiple models

It is possible to search across multiple models with the module method:

```ruby
Elasticsearch::Model.search('fox', [Article, Comment]).results.to_a.map(&:to_hash)
# => [
#  {"_index"=>"articles", "_type"=>"article", "_id"=>"1", "_score"=>0.35136628, "_source"=>...},
#  {"_index"=>"comments", "_type"=>"comment", "_id"=>"1", "_score"=>0.35136628, "_source"=>...}
# ]
```
Elasticsearch::Model.search('fox', [Article, Comment]).records.to_a
|
||||
# Article Load (0.3ms) SELECT "articles".* FROM "articles" WHERE "articles"."id" IN (1)
|
||||
# Comment Load (0.2ms) SELECT "comments".* FROM "comments" WHERE "comments"."id" IN (1,5)
|
||||
# => [#<Article id: 1, title: "Quick brown fox">, #<Comment id: 1, body: "Fox News">, ...]
|
||||
```
|
||||
|
||||
By default, all models which include the `Elasticsearch::Model` module are searched.
|
||||
|
||||
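This works because each model registers itself when it includes the module. A minimal sketch of that registry pattern -- `SearchRegistry` here is an illustrative stand-in for the gem's internal `Elasticsearch::Model::Registry`, not its actual code:

```ruby
# Illustrative registry: each class that includes the module is recorded,
# so a module-level search can later iterate over every registered model.
module SearchRegistry
  def self.models
    @models ||= []
  end

  def self.included(base)
    models << base
  end
end

class Article; include SearchRegistry; end
class Comment; include SearchRegistry; end

SearchRegistry.models.map(&:name)
# => ["Article", "Comment"]
```
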
NOTE: It is _not_ possible to chain other methods on top of the `records` object, since it
is a heterogeneous collection, with models potentially backed by different databases.

#### Pagination

You can implement pagination with the `from` and `size` search parameters. However, search results
can be automatically paginated with the [`kaminari`](http://rubygems.org/gems/kaminari) or
[`will_paginate`](https://github.com/mislav/will_paginate) gems.
(The pagination gems must be added before the Elasticsearch gems in your Gemfile,
or loaded first in your application.)
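The `from`/`size` approach is plain offset arithmetic. A minimal sketch of the mapping from page numbers to search parameters -- the `pagination_params` helper is hypothetical, shown only for illustration:

```ruby
# Hypothetical helper: translate a 1-based page number into the
# zero-based `from` offset and `size` limit that Elasticsearch expects.
def pagination_params(page, per_page: 25)
  { from: (page - 1) * per_page, size: per_page }
end

pagination_params(1)                # => {:from=>0, :size=>25}
pagination_params(3, per_page: 10)  # => {:from=>20, :size=>10}

# The result can be merged into a search definition, e.g.:
# Article.search(query: { match_all: {} }, **pagination_params(2))
```
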
If Kaminari or WillPaginate is loaded, use the familiar paging methods:

```ruby
response.page(2).results
response.page(2).records
```

In a Rails controller, use the `params[:page]` parameter to paginate through results:

```ruby
@articles = Article.search(params[:q]).page(params[:page]).records

@articles.current_page
# => 2
@articles.next_page
# => 3
```

To initialize and include the Kaminari pagination support manually:

```ruby
Kaminari::Hooks.init if defined?(Kaminari::Hooks)
Elasticsearch::Model::Response::Response.__send__ :include, Elasticsearch::Model::Response::Pagination::Kaminari
```

#### The Elasticsearch DSL

In most situations, you'll want to pass the search definition
in the Elasticsearch [domain-specific language](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html) to the client:

```ruby
response = Article.search query:     { match:  { title: "Fox Dogs" } },
                          highlight: { fields: { title: {} } }

response.results.first.highlight.title
# ["Quick brown <em>fox</em>"]
```

You can pass any object which implements a `to_hash` method, which is called automatically,
so you can use a custom class or your favourite JSON builder to build the search definition:

```ruby
require 'jbuilder'

query = Jbuilder.encode do |json|
  json.query do
    json.match do
      json.title do
        json.query "fox dogs"
      end
    end
  end
end

response = Article.search query
response.results.first.title
# => "Quick brown fox"
```

Also, you can use the [**`elasticsearch-dsl`**](https://github.com/elastic/elasticsearch-ruby/tree/master/elasticsearch-dsl) library, which provides a specialized Ruby API for
the Elasticsearch Query DSL:

```ruby
require 'elasticsearch/dsl'

query = Elasticsearch::DSL::Search.search do
  query do
    match :title do
      query 'fox dogs'
    end
  end
end

response = Article.search query
response.results.first.title
# => "Quick brown fox"
```

### Index Configuration

Proper search engine function often requires configuring the index.
The `Elasticsearch::Model` integration provides class methods to set up index settings and mappings.

**NOTE**: Elasticsearch will automatically create an index when a document is indexed,
with default settings and mappings. Create the index in advance with the `create_index!`
method, so your index configuration is respected.

```ruby
class Article
  settings index: { number_of_shards: 1 } do
    mappings dynamic: 'false' do
      indexes :title, analyzer: 'english', index_options: 'offsets'
    end
  end
end

Article.mappings.to_hash
# => {
#      :article => {
#        :dynamic => "false",
#        :properties => {
#          :title => {
#            :type          => "string",
#            :analyzer      => "english",
#            :index_options => "offsets"
#          }
#        }
#      }
#    }

Article.settings.to_hash
# { :index => { :number_of_shards => 1 } }
```

You can use the defined settings and mappings to create an index with the desired configuration:

```ruby
Article.__elasticsearch__.client.indices.delete index: Article.index_name rescue nil
Article.__elasticsearch__.client.indices.create \
  index: Article.index_name,
  body: { settings: Article.settings.to_hash, mappings: Article.mappings.to_hash }
```

There's a shortcut available for this common operation (convenient e.g. in tests):

```ruby
Article.__elasticsearch__.create_index! force: true
Article.__elasticsearch__.refresh_index!
```

By default, the index name and document type are inferred from your class name;
you can, however, set them explicitly:

```ruby
class Article
  index_name    "articles-#{Rails.env}"
  document_type "post"
end
```
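The default inference follows a simple naming convention; a naive sketch, illustrative only -- the gem actually derives the names via ActiveSupport inflections, which handle CamelCase names and irregular plurals:

```ruby
# Naive sketch of the convention: lowercase the class name for the
# document type, and append "s" for the index name. Real pluralization
# and namespaced class names are more involved than this.
def inferred_names(class_name)
  type = class_name.downcase
  { index_name: "#{type}s", document_type: type }
end

inferred_names('Article')
# => {:index_name=>"articles", :document_type=>"article"}
```
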
### Updating the Documents in the Index

Usually, we need to update the Elasticsearch index when records in the database are created, updated or deleted;
use the `index_document`, `update_document` and `delete_document` methods, respectively:

```ruby
Article.first.__elasticsearch__.index_document
# => {"ok"=>true, ... "_version"=>2}
```

#### Automatic Callbacks

You can automatically update the index whenever the record changes, by including
the `Elasticsearch::Model::Callbacks` module in your model:

```ruby
class Article
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks
end

Article.first.update_attribute :title, 'Updated!'

Article.search('*').map { |r| r.title }
# => ["Updated!", "Lime green frogs", "Fast black dogs"]
```

The automatic callback on record update keeps track of changes in your model
(via an [`ActiveModel::Dirty`](http://api.rubyonrails.org/classes/ActiveModel/Dirty.html)-compliant implementation),
and performs a _partial update_ when this support is available.

The automatic callbacks are implemented in the database adapters coming with `Elasticsearch::Model`. You can easily
implement your own adapter: please see the relevant chapter below.

#### Custom Callbacks

If you need more control over the indexing process, you can implement these callbacks yourself,
by hooking into the `after_create`, `after_save`, `after_update` or `after_destroy` operations:

```ruby
class Article
  include Elasticsearch::Model

  after_save    { logger.debug ["Updating document... ", index_document ].join }
  after_destroy { logger.debug ["Deleting document... ", delete_document].join }
end
```

For ActiveRecord-based models, use the `after_commit` callback to protect
your data against inconsistencies caused by transaction rollbacks:

```ruby
class Article < ActiveRecord::Base
  include Elasticsearch::Model

  after_commit on: [:create] do
    __elasticsearch__.index_document if self.published?
  end

  after_commit on: [:update] do
    __elasticsearch__.update_document if self.published?
  end

  after_commit on: [:destroy] do
    __elasticsearch__.delete_document if self.published?
  end
end
```

#### Asynchronous Callbacks

Of course, you're still performing an HTTP request during your database transaction, which is not optimal
for large-scale applications. A better option is to process the index operations in the background,
with a tool like [_Resque_](https://github.com/resque/resque) or [_Sidekiq_](https://github.com/mperham/sidekiq):

```ruby
class Article
  include Elasticsearch::Model

  after_save    { Indexer.perform_async(:index,  self.id) }
  after_destroy { Indexer.perform_async(:delete, self.id) }
end
```

An example implementation of the `Indexer` worker class could look like this:

```ruby
class Indexer
  include Sidekiq::Worker
  sidekiq_options queue: 'elasticsearch', retry: false

  Logger = Sidekiq.logger.level == Logger::DEBUG ? Sidekiq.logger : nil
  Client = Elasticsearch::Client.new host: 'localhost:9200', logger: Logger

  def perform(operation, record_id)
    logger.debug [operation, "ID: #{record_id}"]

    case operation.to_s
      when /index/
        record = Article.find(record_id)
        Client.index index: 'articles', type: 'article', id: record.id, body: record.__elasticsearch__.as_indexed_json
      when /delete/
        Client.delete index: 'articles', type: 'article', id: record_id
      else raise ArgumentError, "Unknown operation '#{operation}'"
    end
  end
end
```

Start the _Sidekiq_ workers with `bundle exec sidekiq --queue elasticsearch --verbose` and
update a model:

```ruby
Article.first.update_attribute :title, 'Updated'
```

You'll see the job being processed in the console where you started the _Sidekiq_ worker:

```
Indexer JID-eb7e2daf389a1e5e83697128 DEBUG: ["index", "ID: 7"]
Indexer JID-eb7e2daf389a1e5e83697128 INFO: PUT http://localhost:9200/articles/article/1 [status:200, request:0.004s, query:n/a]
Indexer JID-eb7e2daf389a1e5e83697128 DEBUG: > {"id":1,"title":"Updated", ...}
Indexer JID-eb7e2daf389a1e5e83697128 DEBUG: < {"ok":true,"_index":"articles","_type":"article","_id":"1","_version":6}
Indexer JID-eb7e2daf389a1e5e83697128 INFO: done: 0.006 sec
```

### Model Serialization

By default, the model instance will be serialized to JSON using the `as_indexed_json` method,
which is defined automatically by the `Elasticsearch::Model::Serializing` module:

```ruby
Article.first.__elasticsearch__.as_indexed_json
# => {"id"=>1, "title"=>"Quick brown fox"}
```

If you want to customize the serialization, just implement the `as_indexed_json` method yourself,
for instance with the [`as_json`](http://api.rubyonrails.org/classes/ActiveModel/Serializers/JSON.html#method-i-as_json) method:

```ruby
class Article
  include Elasticsearch::Model

  def as_indexed_json(options={})
    as_json(only: 'title')
  end
end

Article.first.as_indexed_json
# => {"title"=>"Quick brown fox"}
```

The re-defined method will be used in the indexing methods, such as `index_document`.

Please note that in Rails 3, you need to either set `include_root_in_json: false`, or prevent adding
the "root" in the JSON representation by other means.
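On Rails 3, for example, the root can be disabled globally -- a sketch, assuming an ActiveRecord-based model; where exactly you place this depends on your application (an initializer is typical):

```ruby
# In a Rails 3 initializer: stop ActiveRecord from wrapping serialized
# records in a root key ({"article"=>{...}}), so the indexed document
# is the flat hash Elasticsearch expects.
ActiveRecord::Base.include_root_in_json = false
```
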
#### Relationships and Associations

When you have a more complicated structure/schema, you need to customize the `as_indexed_json` method --
or perform the indexing separately, on your own.
For example, let's take an `Article` model which _has many_ `Comments`,
`Authors` and `Categories`. We might want to define the serialization like this:

```ruby
def as_indexed_json(options={})
  self.as_json(
    include: { categories: { only: :title },
               authors:    { methods: [:full_name], only: [:full_name] },
               comments:   { only: :text }
             })
end

Article.first.as_indexed_json
# => { "id"         => 1,
#      "title"      => "First Article",
#      "created_at" => 2013-12-03 13:39:02 UTC,
#      "updated_at" => 2013-12-03 13:39:02 UTC,
#      "categories" => [ { "title" => "One" } ],
#      "authors"    => [ { "full_name" => "John Smith" } ],
#      "comments"   => [ { "text" => "First comment" } ] }
```

Of course, when you want to use the automatic indexing callbacks, you need to hook into the appropriate
_ActiveRecord_ callbacks -- please see the full example in `examples/activerecord_associations.rb`.

### Other ActiveModel Frameworks

The `Elasticsearch::Model` module is fully compatible with any ActiveModel-compatible model, such as _Mongoid_:

```ruby
require 'mongoid'

Mongoid.connect_to 'articles'

class Article
  include Mongoid::Document

  field :id, type: String
  field :title, type: String

  attr_accessible :id, :title, :published_at

  include Elasticsearch::Model

  def as_indexed_json(options={})
    as_json(except: [:id, :_id])
  end
end

Article.create id: '1', title: 'Quick brown fox'
Article.import

response = Article.search 'fox';
response.records.to_a
# MOPED: 127.0.0.1:27017 QUERY database=articles collection=articles selector={"_id"=>{"$in"=>["1"]}} ...
# => [#<Article _id: 1, id: nil, title: "Quick brown fox", published_at: nil>]
```

Full examples for CouchBase, DataMapper, Mongoid, Ohm and Riak models can be found in the `examples` folder.

### Adapters

To support various "OxM" (object-relational- or object-document-mapper) implementations and frameworks,
the `Elasticsearch::Model` integration supports an "adapter" concept.

An adapter provides implementations for common behaviour, such as fetching records from the database,
hooking into model callbacks for automatic index updates, or efficient bulk loading from the database.
The integration comes with adapters for _ActiveRecord_ and _Mongoid_ out of the box.

Writing an adapter for your favourite framework is straightforward -- let's see
a simplified adapter for [_DataMapper_](http://datamapper.org):

```ruby
module DataMapperAdapter

  # Implement the interface for fetching records
  #
  module Records
    def records
      klass.all(id: ids)
    end

    # ...
  end
end

# Register the adapter
#
Elasticsearch::Model::Adapter.register(
  DataMapperAdapter,
  lambda { |klass| defined?(::DataMapper::Resource) and klass.ancestors.include?(::DataMapper::Resource) }
)
```

Require the adapter and include `Elasticsearch::Model` in the class:

```ruby
require 'datamapper_adapter'

class Article
  include DataMapper::Resource
  include Elasticsearch::Model

  property :id,    Serial
  property :title, String
end
```

When accessing the `records` method of the response, for example,
the implementation from our adapter will now be used:

```ruby
response = Article.search 'foo'

response.records.to_a
# ~ (0.000057) SELECT "id", "title", "published_at" FROM "articles" WHERE "id" IN (3, 1) ORDER BY "id"
# => [#<Article @id=1 @title="Foo" @published_at=nil>, #<Article @id=3 @title="Foo Foo" @published_at=nil>]

response.records.records.class
# => DataMapper::Collection
```

More examples can be found in the `examples` folder. Please see the `Elasticsearch::Model::Adapter`
module and its submodules for technical information.

### Settings

The module provides a common `settings` method to customize various features.

Before version 7.0.0 of the gem, the only supported setting was `:inheritance_enabled`. This setting has been deprecated
and removed.

## Development and Community

For local development, clone the repository and run `bundle install`. See `rake -T` for a list of
available Rake tasks for running tests, generating documentation, starting a testing cluster, etc.

Bug fixes and features must be covered by unit tests.

GitHub's pull requests and issues are used to communicate, send bug reports and code contributions.

To run all tests against a test Elasticsearch cluster, use a command like this:

```bash
curl -# https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.0.0.RC1.tar.gz | tar xz -C tmp/
SERVER=start TEST_CLUSTER_COMMAND=$PWD/tmp/elasticsearch-1.0.0.RC1/bin/elasticsearch bundle exec rake test:all
```

### Single Table Inheritance deprecation

Single Table Inheritance has been supported through the 6.x series of this gem. With this feature,
Elasticsearch settings (index mappings, etc.) on a parent model could be inherited by a child model, leading to different
model documents being indexed into the same Elasticsearch index. This feature depended on the ability to set a `type`
for a document in Elasticsearch. The Elasticsearch team has deprecated support for `types`, as described
[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html).
This gem will also remove support for types and Single Table Inheritance in version 7.0, as it enables an anti-pattern.
Please save different model documents in separate indices. If you want to use STI, you can include an artificial
`type` field manually in each document and use it in other operations.
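The "artificial `type` field" workaround can be sketched as follows; the `Opinion` class and its attributes are made up for illustration:

```ruby
# Hypothetical model: merge an explicit "type" discriminator into the
# indexed JSON instead of relying on Elasticsearch mapping types.
class Opinion
  def initialize(attributes)
    @attributes = attributes
  end

  def as_indexed_json(options = {})
    @attributes.merge('type' => self.class.name.downcase)
  end
end

Opinion.new('title' => 'On foxes').as_indexed_json
# => {"title"=>"On foxes", "type"=>"opinion"}
```

A query filtering on that field then plays the role the mapping type used to.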
## License

This software is licensed under the Apache 2 license, quoted below.

    Copyright (c) 2014 Elasticsearch <http://www.elasticsearch.org>

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.

@ -1,61 +0,0 @@
require "bundler/gem_tasks"

desc "Run unit tests"
task :default => 'test:unit'
task :test    => 'test:unit'

if RUBY_VERSION < '2.3'
  GEMFILES = ['3.0.gemfile', '4.0.gemfile', '5.0.gemfile']
else
  GEMFILES = ['4.0.gemfile', '5.0.gemfile']
end

namespace :bundle do
  desc 'Install dependencies for all the Gemfiles in /gemfiles. Optionally define env variable RAILS_VERSIONS. E.g. RAILS_VERSIONS=3.0,5.0'
  task :install do
    unless defined?(JRUBY_VERSION)
      puts '-' * 80
      gemfiles = ENV['RAILS_VERSIONS'] ? ENV['RAILS_VERSIONS'].split(',').map { |v| "#{v}.gemfile" } : GEMFILES
      gemfiles.each do |gemfile|
        Bundler.with_clean_env do
          sh "bundle install --gemfile #{File.expand_path('../gemfiles/' + gemfile, __FILE__)}"
        end
        puts '-' * 80
      end
    end
  end
end

# ----- Test tasks ------------------------------------------------------------

require 'rake/testtask'
namespace :test do

  desc 'Run all tests. Optionally define env variable RAILS_VERSIONS. E.g. RAILS_VERSIONS=3.0,5.0'
  task :all, [:rails_versions] do |task, args|
    gemfiles = ENV['RAILS_VERSIONS'] ? ENV['RAILS_VERSIONS'].split(',').map { |v| "#{v}.gemfile" } : GEMFILES
    puts '-' * 80
    gemfiles.each do |gemfile|
      sh "BUNDLE_GEMFILE='#{File.expand_path("../gemfiles/#{gemfile}", __FILE__)}' " +
         " bundle exec rspec"
      puts '-' * 80
    end
  end
end

# ----- Documentation tasks ---------------------------------------------------

require 'yard'
YARD::Rake::YardocTask.new(:doc) do |t|
  t.options = %w| --embed-mixins --markup=markdown |
end

# ----- Code analysis tasks ---------------------------------------------------

if defined?(RUBY_VERSION) && RUBY_VERSION > '1.9'
  require 'cane/rake_task'
  Cane::RakeTask.new(:quality) do |cane|
    cane.abc_max = 15
    cane.no_style = true
  end
end

@ -1,54 +0,0 @@
# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'elasticsearch/model/version'

Gem::Specification.new do |s|
  s.name          = "elasticsearch-model"
  s.version       = Elasticsearch::Model::VERSION
  s.authors       = ["Karel Minarik"]
  s.email         = ["karel.minarik@elasticsearch.org"]
  s.description   = "ActiveModel/Record integrations for Elasticsearch."
  s.summary       = "ActiveModel/Record integrations for Elasticsearch."
  s.homepage      = "https://github.com/elasticsearch/elasticsearch-rails/"
  s.license       = "Apache 2"

  s.files         = `git ls-files`.split($/)
  s.executables   = s.files.grep(%r{^bin/}) { |f| File.basename(f) }
  s.test_files    = s.files.grep(%r{^(test|spec|features)/})
  s.require_paths = ["lib"]

  s.extra_rdoc_files = [ "README.md", "LICENSE.txt" ]
  s.rdoc_options     = [ "--charset=UTF-8" ]

  s.required_ruby_version = ">= 1.9.3"

  s.add_dependency "elasticsearch", '> 1'
  s.add_dependency "activesupport", '> 3'
  s.add_dependency "hashie"

  s.add_development_dependency "bundler"
  s.add_development_dependency "rake", "~> 11.1"

  s.add_development_dependency "elasticsearch-extensions"

  s.add_development_dependency "sqlite3" unless defined?(JRUBY_VERSION)
  s.add_development_dependency "activemodel", "> 3"

  s.add_development_dependency "oj" unless defined?(JRUBY_VERSION)
  s.add_development_dependency "kaminari"
  s.add_development_dependency "will_paginate"

  s.add_development_dependency "minitest"
  s.add_development_dependency "test-unit"
  s.add_development_dependency "shoulda-context"
  s.add_development_dependency "mocha"
  s.add_development_dependency "turn"
  s.add_development_dependency "yard"
  s.add_development_dependency "ruby-prof" unless defined?(JRUBY_VERSION)
  s.add_development_dependency "pry"

  s.add_development_dependency "simplecov"
  s.add_development_dependency "cane"
  s.add_development_dependency "require-prof"
end

@ -1,77 +0,0 @@
# ActiveRecord and Elasticsearch
# ==============================
#
# https://github.com/rails/rails/tree/master/activerecord

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'pry'
Pry.config.history.file = File.expand_path('../../tmp/elasticsearch_development.pry', __FILE__)

require 'logger'
require 'ansi/core'
require 'active_record'
require 'kaminari'

require 'elasticsearch/model'

ActiveRecord::Base.logger = ActiveSupport::Logger.new(STDOUT)
ActiveRecord::Base.establish_connection( adapter: 'sqlite3', database: ":memory:" )

ActiveRecord::Schema.define(version: 1) do
  create_table :articles do |t|
    t.string :title
    t.date :published_at
    t.timestamps
  end
end

Kaminari::Hooks.init if defined?(Kaminari::Hooks)

class Article < ActiveRecord::Base
end

# Store data
#
Article.delete_all
Article.create title: 'Foo'
Article.create title: 'Bar'
Article.create title: 'Foo Foo'

# Index data
#
client = Elasticsearch::Client.new log: true

# client.indices.delete index: 'articles' rescue nil
# client.indices.create index: 'articles', body: { mappings: { article: { dynamic: 'strict' }, properties: {} } }

client.indices.delete index: 'articles' rescue nil
client.bulk index: 'articles',
            type: 'article',
            body: Article.all.as_json.map { |a| { index: { _id: a.delete('id'), data: a } } },
            refresh: true

# Extend the model with Elasticsearch support
#
Article.__send__ :include, Elasticsearch::Model
# Article.__send__ :include, Elasticsearch::Model::Callbacks

# ActiveRecord::Base.logger.silence do
#   10_000.times do |i|
#     Article.create title: "Foo #{i}"
#   end
# end

puts '', '-' * Pry::Terminal.width!

Elasticsearch::Model.client = Elasticsearch::Client.new log: true

response = Article.search 'foo';

p response.size
p response.results.size
p response.records.size

Pry.start(binding, prompt: lambda { |obj, nest_level, _| '> ' },
                   input: StringIO.new('response.records.to_a'),
                   quiet: true)

@ -1,213 +0,0 @@
# ActiveRecord associations and Elasticsearch
# ===========================================
#
# https://github.com/rails/rails/tree/master/activerecord
# http://guides.rubyonrails.org/association_basics.html
#
# Run me with:
#
#     ruby -I lib examples/activerecord_associations.rb
#

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'pry'

require 'logger'
require 'ansi/core'
require 'active_record'

require 'json'
require 'elasticsearch/model'

ActiveRecord::Base.logger = ActiveSupport::Logger.new(STDOUT)
ActiveRecord::Base.establish_connection( adapter: 'sqlite3', database: ":memory:" )

# ----- Schema definition -------------------------------------------------------------------------

ActiveRecord::Schema.define(version: 1) do
  create_table :categories do |t|
    t.string :title
    t.timestamps null: false
  end

  create_table :authors do |t|
    t.string :first_name, :last_name
    t.string :department
    t.timestamps null: false
  end

  create_table :authorships do |t|
    t.references :article
    t.references :author
    t.timestamps null: false
  end

  create_table :articles do |t|
    t.string :title
    t.timestamps null: false
  end

  create_table :articles_categories, id: false do |t|
    t.references :article, :category
  end

  create_table :comments do |t|
    t.string :text
    t.references :article
    t.timestamps null: false
  end

  add_index(:comments, :article_id) unless index_exists?(:comments, :article_id)
end

# ----- Elasticsearch client setup ----------------------------------------------------------------

Elasticsearch::Model.client = Elasticsearch::Client.new log: true
Elasticsearch::Model.client.transport.logger.formatter = proc { |s, d, p, m| "\e[2m#{m}\n\e[0m" }

# ----- Search integration ------------------------------------------------------------------------

module Searchable
  extend ActiveSupport::Concern

  included do
    include Elasticsearch::Model
    include Elasticsearch::Model::Callbacks

    include Indexing
    after_touch { __elasticsearch__.index_document }
  end

  module Indexing

    # Customize the JSON serialization for Elasticsearch
    def as_indexed_json(options={})
      self.as_json(
        include: { categories: { only: :title },
                   authors:    { methods: [:full_name, :department], only: [:full_name, :department] },
                   comments:   { only: :text }
                 })
    end
  end
end

# ----- Model definitions -------------------------------------------------------------------------

class Category < ActiveRecord::Base
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks

  has_and_belongs_to_many :articles
end

class Author < ActiveRecord::Base
  has_many :authorships

  after_update { self.authorships.each(&:touch) }

  def full_name
    [first_name, last_name].compact.join(' ')
  end
end

class Authorship < ActiveRecord::Base
  belongs_to :author
  belongs_to :article, touch: true
end

class Article < ActiveRecord::Base
  include Searchable

  has_and_belongs_to_many :categories, after_add:    [ lambda { |a,c| a.__elasticsearch__.index_document } ],
                                       after_remove: [ lambda { |a,c| a.__elasticsearch__.index_document } ]
  has_many :authorships
  has_many :authors, through: :authorships
  has_many :comments
end

class Comment < ActiveRecord::Base
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks

  belongs_to :article, touch: true
end

# ----- Insert data -------------------------------------------------------------------------------

# Create category
#
category = Category.create title: 'One'

# Create author
#
author = Author.create first_name: 'John', last_name: 'Smith', department: 'Business'

# Create article
#
article = Article.create title: 'First Article'

# Assign category
#
article.categories << category

# Assign author
#
article.authors << author

# Add comment
#
article.comments.create text: 'First comment for article One'
article.comments.create text: 'Second comment for article One'

Elasticsearch::Model.client.indices.refresh index: Elasticsearch::Model::Registry.all.map(&:index_name)

# Search for a term and return records
#
puts "",
     "Articles containing 'one':".ansi(:bold),
     Article.search('one').records.to_a.map(&:inspect),
     ""

puts "",
     "All Models containing 'one':".ansi(:bold),
     Elasticsearch::Model.search('one').records.to_a.map(&:inspect),
     ""

# Difference between `records` and `results`
#
response = Article.search query: { match: { title: 'first' } }

puts "",
     "Search results are wrapped in the <#{response.class}> class",
     ""

puts "",
     "Access the <ActiveRecord> instances with the `#records` method:".ansi(:bold),
     response.records.map { |r| "* #{r.title} | Authors: #{r.authors.map(&:full_name) } | Comment count: #{r.comments.size}" }.join("\n"),
     ""

puts "",
     "Access the Elasticsearch documents with the `#results` method (without touching the database):".ansi(:bold),
|
||||
response.results.map { |r| "* #{r.title} | Authors: #{r.authors.map(&:full_name) } | Comment count: #{r.comments.size}" }.join("\n"),
|
||||
""
|
||||
|
||||
puts "",
|
||||
"The whole indexed document (according to `Article#as_indexed_json`):".ansi(:bold),
|
||||
JSON.pretty_generate(response.results.first._source.to_hash),
|
||||
""
|
||||
|
||||
# Retrieve only selected fields from Elasticsearch
|
||||
#
|
||||
response = Article.search query: { match: { title: 'first' } }, _source: ['title', 'authors.full_name']
|
||||
|
||||
puts "",
|
||||
"Retrieve only selected fields from Elasticsearch:".ansi(:bold),
|
||||
JSON.pretty_generate(response.results.first._source.to_hash),
|
||||
""
|
||||
|
||||
# ----- Pry ---------------------------------------------------------------------------------------
|
||||
|
||||
Pry.start(binding, prompt: lambda { |obj, nest_level, _| '> ' },
|
||||
input: StringIO.new('response.records.first'),
|
||||
quiet: true)
|
|
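The `Article#as_indexed_json` above embeds categories, authors (with the computed `full_name`), and comments into a single nested document. As a rough sketch of the shape it produces for the sample data (a hand-written plain-Ruby hash, not produced by ActiveRecord):

```ruby
require 'json'

# Hypothetical literal hash mirroring the nested document built by
# Article#as_indexed_json for the records created above.
article = {
  'title'      => 'First Article',
  'categories' => [{ 'title' => 'One' }],
  'authors'    => [{ 'full_name' => 'John Smith', 'department' => 'Business' }],
  'comments'   => [{ 'text' => 'First comment for article One' },
                   { 'text' => 'Second comment for article One' }]
}

# This is (approximately) what `response.results.first._source` would contain.
puts JSON.pretty_generate(article)
```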
@@ -1,135 +0,0 @@
# Custom Analyzer for ActiveRecord integration with Elasticsearch
# ===============================================================

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'ansi'
require 'logger'

require 'active_record'
require 'elasticsearch/model'

ActiveRecord::Base.logger = ActiveSupport::Logger.new(STDOUT)
ActiveRecord::Base.establish_connection( adapter: 'sqlite3', database: ":memory:" )

ActiveRecord::Schema.define(version: 1) do
  create_table :articles do |t|
    t.string :title
    t.date :published_at
    t.timestamps
  end
end

Elasticsearch::Model.client.transport.logger = ActiveSupport::Logger.new(STDOUT)
Elasticsearch::Model.client.transport.logger.formatter = lambda { |s, d, p, m| "#{m.ansi(:faint)}\n" }

class Article < ActiveRecord::Base
  include Elasticsearch::Model

  settings index: {
    number_of_shards: 1,
    number_of_replicas: 0,
    analysis: {
      analyzer: {
        pattern: {
          type: 'pattern',
          pattern: "\\s|_|-|\\.",
          lowercase: true
        },
        trigram: {
          tokenizer: 'trigram'
        }
      },
      tokenizer: {
        trigram: {
          type: 'ngram',
          min_gram: 3,
          max_gram: 3,
          token_chars: ['letter', 'digit']
        }
      }
    } } do
    mapping do
      indexes :title, type: 'text', analyzer: 'english' do
        indexes :keyword, analyzer: 'keyword'
        indexes :pattern, analyzer: 'pattern'
        indexes :trigram, analyzer: 'trigram'
      end
    end
  end
end

# Create example records
#
Article.delete_all
Article.create title: 'Foo'
Article.create title: 'Foo-Bar'
Article.create title: 'Foo_Bar_Bazooka'
Article.create title: 'Foo.Bar'

# Index records
#
errors = Article.import force: true, refresh: true, return: 'errors'
unless errors.empty?
  puts "[!] Errors importing records: #{errors.map { |d| d['index']['error'] }.join(', ')}".ansi(:red)
  exit(1)
end

puts '', '-'*80

puts "English analyzer [Foo_Bar_1_Bazooka]".ansi(:bold),
     "Tokens: " +
     Article.__elasticsearch__.client.indices
       .analyze(index: Article.index_name, body: { field: 'title', text: 'Foo_Bar_1_Bazooka' })['tokens']
       .map { |d| "[#{d['token']}]" }.join(' '),
     "\n"

puts "Keyword analyzer [Foo_Bar_1_Bazooka]".ansi(:bold),
     "Tokens: " +
     Article.__elasticsearch__.client.indices
       .analyze(index: Article.index_name, body: { field: 'title.keyword', text: 'Foo_Bar_1_Bazooka' })['tokens']
       .map { |d| "[#{d['token']}]" }.join(' '),
     "\n"

puts "Pattern analyzer [Foo_Bar_1_Bazooka]".ansi(:bold),
     "Tokens: " +
     Article.__elasticsearch__.client.indices
       .analyze(index: Article.index_name, body: { field: 'title.pattern', text: 'Foo_Bar_1_Bazooka' })['tokens']
       .map { |d| "[#{d['token']}]" }.join(' '),
     "\n"

puts "Trigram analyzer [Foo_Bar_1_Bazooka]".ansi(:bold),
     "Tokens: " +
     Article.__elasticsearch__.client.indices
       .analyze(index: Article.index_name, body: { field: 'title.trigram', text: 'Foo_Bar_1_Bazooka' })['tokens']
       .map { |d| "[#{d['token']}]" }.join(' '),
     "\n"

puts '', '-'*80

response = Article.search query: { match: { 'title' => 'foo' } }

puts "English search for 'foo'".ansi(:bold),
     "#{response.response.hits.total} matches: " +
     response.records.map { |d| d.title }.join(', '),
     "\n"

puts '', '-'*80

response = Article.search query: { match: { 'title.pattern' => 'foo' } }

puts "Pattern search for 'foo'".ansi(:bold),
     "#{response.response.hits.total} matches: " +
     response.records.map { |d| d.title }.join(', '),
     "\n"

puts '', '-'*80

response = Article.search query: { match: { 'title.trigram' => 'zoo' } }

puts "Trigram search for 'zoo'".ansi(:bold),
     "#{response.response.hits.total} matches: " +
     response.records.map { |d| d.title }.join(', '),
     "\n"

puts '', '-'*80

require 'pry'; binding.pry;
@@ -1,69 +0,0 @@
require 'ansi'
require 'active_record'
require 'elasticsearch/model'

ActiveRecord::Base.logger = ActiveSupport::Logger.new(STDOUT)
ActiveRecord::Base.establish_connection( adapter: 'sqlite3', database: ":memory:" )

ActiveRecord::Schema.define(version: 1) do
  create_table :articles do |t|
    t.string :title
    t.date :published_at
    t.timestamps
  end
end

class Article < ActiveRecord::Base
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks

  mapping do
    indexes :title, type: 'text' do
      indexes :suggest, type: 'completion'
    end
    indexes :url, type: 'keyword'
  end

  def as_indexed_json(options={})
    as_json.merge 'url' => "/articles/#{id}"
  end
end

Article.__elasticsearch__.client = Elasticsearch::Client.new log: true

# Create index

Article.__elasticsearch__.create_index! force: true

# Store data

Article.delete_all
Article.create title: 'Foo'
Article.create title: 'Bar'
Article.create title: 'Foo Foo'
Article.__elasticsearch__.refresh_index!

# Search and suggest

response_1 = Article.search 'foo'

puts "Article search:".ansi(:bold),
     response_1.to_a.map { |d| "Title: #{d.title}" }.inspect.ansi(:bold, :yellow)

response_2 = Article.search \
  query: {
    match: { title: 'foo' }
  },
  suggest: {
    articles: {
      text: 'foo',
      completion: { field: 'title.suggest' }
    }
  },
  _source: ['title', 'url']

puts "Article search with suggest:".ansi(:bold),
     response_2.response['suggest']['articles'].first['options'].map { |d| "#{d['text']} -> #{d['_source']['url']}" }.
     inspect.ansi(:bold, :blue)

require 'pry'; binding.pry;
@@ -1,101 +0,0 @@
require 'ansi'
require 'sqlite3'
require 'active_record'
require 'elasticsearch/model'

ActiveRecord::Base.logger = ActiveSupport::Logger.new(STDOUT)
ActiveRecord::Base.establish_connection( adapter: 'sqlite3', database: ":memory:" )

ActiveRecord::Schema.define(version: 1) do
  create_table :articles do |t|
    t.string :title
    t.date :published_at
    t.timestamps
  end
end

class Article < ActiveRecord::Base
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks

  article_es_settings = {
    index: {
      analysis: {
        filter: {
          autocomplete_filter: {
            type: "edge_ngram",
            min_gram: 1,
            max_gram: 20
          }
        },
        analyzer: {
          autocomplete: {
            type: "custom",
            tokenizer: "standard",
            filter: ["lowercase", "autocomplete_filter"]
          }
        }
      }
    }
  }

  settings article_es_settings do
    mapping do
      indexes :title
      indexes :suggestable_title, type: 'string', analyzer: 'autocomplete'
    end
  end

  def as_indexed_json(options={})
    as_json.merge(suggestable_title: title)
  end
end

Article.__elasticsearch__.client = Elasticsearch::Client.new log: true

# Create index

Article.__elasticsearch__.create_index! force: true

# Store data

Article.delete_all
Article.create title: 'Foo'
Article.create title: 'Bar'
Article.create title: 'Foo Foo'
Article.__elasticsearch__.refresh_index!

# Search and suggest
fulltext_search_response = Article.search(query: { match: { title: 'foo'} } )

puts "", "Article search for 'foo':".ansi(:bold),
     fulltext_search_response.to_a.map { |d| "Title: #{d.title}" }.inspect.ansi(:bold, :yellow),
     ""

fulltext_search_response_2 = Article.search(query: { match: { title: 'fo'} } )

puts "", "Article search for 'fo':".ansi(:bold),
     fulltext_search_response_2.to_a.map { |d| "Title: #{d.title}" }.inspect.ansi(:bold, :red),
     ""

autocomplete_search_response = Article.search(query: { match: { suggestable_title: { query: 'fo', analyzer: 'standard'} } } )

puts "", "Article autocomplete for 'fo':".ansi(:bold),
     autocomplete_search_response.to_a.map { |d| "Title: #{d.suggestable_title}" }.inspect.ansi(:bold, :green),
     ""

puts "", "Text 'Foo Bar' analyzed with the default analyzer:".ansi(:bold),
     Article.__elasticsearch__.client.indices.analyze(
       index: Article.__elasticsearch__.index_name,
       field: 'title',
       text: 'Foo Bar')['tokens'].map { |t| t['token'] }.inspect.ansi(:bold, :yellow),
     ""

puts "", "Text 'Foo Bar' analyzed with the autocomplete filter:".ansi(:bold),
     Article.__elasticsearch__.client.indices.analyze(
       index: Article.__elasticsearch__.index_name,
       field: 'suggestable_title',
       text: 'Foo Bar')['tokens'].map { |t| t['token'] }.inspect.ansi(:bold, :yellow),
     ""

require 'pry'; binding.pry;
@@ -1,66 +0,0 @@
# Couchbase and Elasticsearch
# ===========================
#
# https://github.com/couchbase/couchbase-ruby-model

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'pry'
Pry.config.history.file = File.expand_path('../../tmp/elasticsearch_development.pry', __FILE__)

require 'logger'
require 'couchbase/model'

require 'elasticsearch/model'

# Documents are stored as JSON objects in Couchbase but have rich
# semantics, including validations and associations.
class Article < Couchbase::Model
  attribute :title
  attribute :published_at

  # view :all, :limit => 10, :descending => true
  # TODO: Implement view a la
  #       bucket.save_design_doc <<-JSON
  #         {
  #           "_id": "_design/article",
  #           "language": "javascript",
  #           "views": {
  #             "all": {
  #               "map": "function(doc, meta) { emit(doc.id, doc.title); }"
  #             }
  #           }
  #         }
  #       JSON

end

# Extend the model with Elasticsearch support
#
Article.__send__ :extend, Elasticsearch::Model::Client::ClassMethods
Article.__send__ :extend, Elasticsearch::Model::Searching::ClassMethods
Article.__send__ :extend, Elasticsearch::Model::Naming::ClassMethods

# Create documents in Couchbase
#
Article.create id: '1', title: 'Foo'     rescue nil
Article.create id: '2', title: 'Bar'     rescue nil
Article.create id: '3', title: 'Foo Foo' rescue nil

# Index data into Elasticsearch
#
client = Elasticsearch::Client.new log: true

client.indices.delete index: 'articles' rescue nil
client.bulk index: 'articles',
            type: 'article',
            body: Article.find(['1', '2', '3']).map { |a|
              { index: { _id: a.id, data: a.attributes } }
            },
            refresh: true

response = Article.search 'foo', index: 'articles', type: 'article'

Pry.start(binding, prompt: lambda { |obj, nest_level, _| '> ' },
                   input: StringIO.new('response.records.to_a'),
                   quiet: true)
@@ -1,81 +0,0 @@
# DataMapper and Elasticsearch
# ============================
#
# https://github.com/datamapper/dm-core
# https://github.com/datamapper/dm-active_model

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'pry'
Pry.config.history.file = File.expand_path('../../tmp/elasticsearch_development.pry', __FILE__)

require 'logger'
require 'ansi/core'

require 'data_mapper'
require 'dm-active_model'

require 'active_support/all'

require 'elasticsearch/model'

DataMapper::Logger.new(STDOUT, :debug)
DataMapper.setup(:default, 'sqlite::memory:')

class Article
  include DataMapper::Resource

  property :id, Serial
  property :title, String
  property :published_at, DateTime
end

DataMapper.auto_migrate!
DataMapper.finalize

Article.create title: 'Foo'
Article.create title: 'Bar'
Article.create title: 'Foo Foo'

# Extend the model with Elasticsearch support
#
Article.__send__ :include, Elasticsearch::Model

# The DataMapper adapter
#
module DataMapperAdapter

  # Implement the interface for fetching records
  #
  module Records
    def records
      klass.all(id: ids)
    end

    # ...
  end

  module Callbacks
    def self.included(model)
      model.class_eval do
        after(:create)  { __elasticsearch__.index_document  }
        after(:save)    { __elasticsearch__.update_document }
        after(:destroy) { __elasticsearch__.delete_document }
      end
    end
  end
end

# Register the adapter
#
Elasticsearch::Model::Adapter.register(
  DataMapperAdapter,
  lambda { |klass| defined?(::DataMapper::Resource) and klass.ancestors.include?(::DataMapper::Resource) }
)

response = Article.search 'foo'

Pry.start(binding, prompt: lambda { |obj, nest_level, _| '> ' },
                   input: StringIO.new('response.records.to_a'),
                   quiet: true)
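The adapter registration above pairs an adapter module with a condition lambda, and the library later picks the first adapter whose condition accepts the model class. A minimal standalone sketch of that dispatch pattern (hypothetical names — `AdapterRegistry`, `StringAdapter`, `DefaultAdapter` — not the gem's actual implementation):

```ruby
# Sketch of condition-lambda adapter dispatch, as used by
# Elasticsearch::Model::Adapter.register (names here are hypothetical).
module AdapterRegistry
  def self.adapters
    @adapters ||= {}
  end

  # Pair an adapter module with a condition lambda.
  def self.register(adapter, condition)
    adapters[adapter] = condition
  end

  # Return the first registered adapter whose condition accepts the class.
  def self.from_class(klass)
    pair = adapters.find { |_adapter, condition| condition.call(klass) }
    pair && pair.first
  end
end

module StringAdapter; end
module DefaultAdapter; end

AdapterRegistry.register(StringAdapter,  lambda { |klass| klass <= String })
AdapterRegistry.register(DefaultAdapter, lambda { |_klass| true })

puts AdapterRegistry.from_class(String)  # matches the String condition
puts AdapterRegistry.from_class(Integer) # falls through to the default
```

Registration order matters: because Ruby hashes preserve insertion order, the catch-all default is only reached when no earlier condition matched.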
@@ -1,68 +0,0 @@
# Mongoid and Elasticsearch
# =========================
#
# http://mongoid.org/en/mongoid/index.html

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'pry'
Pry.config.history.file = File.expand_path('../../tmp/elasticsearch_development.pry', __FILE__)

require 'benchmark'
require 'logger'
require 'ansi/core'
require 'mongoid'

require 'elasticsearch/model'
require 'elasticsearch/model/callbacks'

Mongoid.logger.level = Logger::DEBUG
Moped.logger.level   = Logger::DEBUG

Mongoid.connect_to 'articles'

Elasticsearch::Model.client = Elasticsearch::Client.new host: 'localhost:9200', log: true

class Article
  include Mongoid::Document
  field :id, type: String
  field :title, type: String
  field :published_at, type: DateTime
  attr_accessible :id, :title, :published_at if respond_to? :attr_accessible

  def as_indexed_json(options={})
    as_json(except: [:id, :_id])
  end
end

# Extend the model with Elasticsearch support
#
Article.__send__ :include, Elasticsearch::Model
# Article.__send__ :include, Elasticsearch::Model::Callbacks

# Store data
#
Article.delete_all
Article.create id: '1', title: 'Foo'
Article.create id: '2', title: 'Bar'
Article.create id: '3', title: 'Foo Foo'

# Index data
#
client = Elasticsearch::Client.new host: 'localhost:9200', log: true

client.indices.delete index: 'articles' rescue nil
client.bulk index: 'articles',
            type: 'article',
            body: Article.all.map { |a| { index: { _id: a.id, data: a.attributes } } },
            refresh: true

# puts Benchmark.realtime { 9_875.times { |i| Article.create title: "Foo #{i}" } }

puts '', '-'*Pry::Terminal.width!

response = Article.search 'foo'

Pry.start(binding, prompt: lambda { |obj, nest_level, _| '> ' },
                   input: StringIO.new('response.records.to_a'),
                   quiet: true)
@@ -1,70 +0,0 @@
# Ohm for Redis and Elasticsearch
# ===============================
#
# https://github.com/soveran/ohm#example

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'pry'
Pry.config.history.file = File.expand_path('../../tmp/elasticsearch_development.pry', __FILE__)

require 'logger'
require 'ansi/core'
require 'active_model'
require 'ohm'

require 'elasticsearch/model'

class Article < Ohm::Model
  # Include JSON serialization from ActiveModel
  include ActiveModel::Serializers::JSON

  attribute :title
  attribute :published_at
end

# Extend the model with Elasticsearch support
#
Article.__send__ :include, Elasticsearch::Model

# Register a custom adapter
#
module Elasticsearch
  module Model
    module Adapter
      module Ohm
        Adapter.register self,
                         lambda { |klass| defined?(::Ohm::Model) and klass.ancestors.include?(::Ohm::Model) }
        module Records
          def records
            klass.fetch(@ids)
          end
        end
      end
    end
  end
end

# Configure the Elasticsearch client to log operations
#
Elasticsearch::Model.client = Elasticsearch::Client.new log: true

puts '', '-'*Pry::Terminal.width!

Article.all.map { |a| a.delete }
Article.create id: '1', title: 'Foo'
Article.create id: '2', title: 'Bar'
Article.create id: '3', title: 'Foo Foo'

Article.__elasticsearch__.client.indices.delete index: 'articles' rescue nil
Article.__elasticsearch__.client.bulk index: 'articles',
                                      type: 'article',
                                      body: Article.all.map { |a| { index: { _id: a.id, data: a.attributes } } },
                                      refresh: true

response = Article.search 'foo', index: 'articles', type: 'article'

Pry.start(binding, prompt: lambda { |obj, nest_level, _| '> ' },
                   input: StringIO.new('response.records.to_a'),
                   quiet: true)
@@ -1,52 +0,0 @@
# Riak and Elasticsearch
# ======================
#
# https://github.com/basho-labs/ripple

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'pry'
Pry.config.history.file = File.expand_path('../../tmp/elasticsearch_development.pry', __FILE__)

require 'logger'
require 'ripple'

require 'elasticsearch/model'

# Documents are stored as JSON objects in Riak but have rich
# semantics, including validations and associations.
class Article
  include Ripple::Document

  property :title, String
  property :published_at, Time, :default => proc { Time.now }
end

# Extend the model with Elasticsearch support
#
Article.__send__ :include, Elasticsearch::Model

# Create documents in Riak
#
Article.destroy_all
Article.create id: '1', title: 'Foo'
Article.create id: '2', title: 'Bar'
Article.create id: '3', title: 'Foo Foo'

# Index data into Elasticsearch
#
client = Elasticsearch::Client.new log: true

client.indices.delete index: 'articles' rescue nil
client.bulk index: 'articles',
            type: 'article',
            body: Article.all.map { |a|
              { index: { _id: a.key, data: JSON.parse(a.robject.raw_data) } }
            }.as_json,
            refresh: true

response = Article.search 'foo'

Pry.start(binding, prompt: lambda { |obj, nest_level, _| '> ' },
                   input: StringIO.new('response.records.to_a'),
                   quiet: true)
@@ -1,18 +0,0 @@
# Usage:
#
#     $ BUNDLE_GEMFILE=./gemfiles/3.0.gemfile bundle install
#     $ BUNDLE_GEMFILE=./gemfiles/3.0.gemfile bundle exec rake test:integration

source 'https://rubygems.org'

gemspec path: '../'

gem 'activemodel', '>= 3.0'
gem 'activerecord', '~> 3.2'
gem 'mongoid', '>= 3.0'
gem 'sqlite3', '~> 1.3.6' unless defined?(JRUBY_VERSION)

group :development, :testing do
  gem 'rspec'
  gem 'pry-nav'
end
@@ -1,18 +0,0 @@
# Usage:
#
#     $ BUNDLE_GEMFILE=./gemfiles/4.0.gemfile bundle install
#     $ BUNDLE_GEMFILE=./gemfiles/4.0.gemfile bundle exec rake test:integration

source 'https://rubygems.org'

gemspec path: '../'

gem 'activemodel', '~> 4'
gem 'activerecord', '~> 4'
gem 'sqlite3', '~> 1.3.6' unless defined?(JRUBY_VERSION)
gem 'mongoid', '~> 5'

group :development, :testing do
  gem 'rspec'
  gem 'pry-nav'
end
@@ -1,18 +0,0 @@
# Usage:
#
#     $ BUNDLE_GEMFILE=./gemfiles/5.0.gemfile bundle install
#     $ BUNDLE_GEMFILE=./gemfiles/5.0.gemfile bundle exec rake test:integration

source 'https://rubygems.org'

gemspec path: '../'

gem 'activemodel', '~> 5'
gem 'activerecord', '~> 5'
gem 'sqlite3' unless defined?(JRUBY_VERSION)
gem 'mongoid', '~> 6'

group :development, :testing do
  gem 'rspec'
  gem 'pry-nav'
end
@@ -1,220 +0,0 @@
require 'hashie/mash'

require 'active_support/core_ext/module/delegation'

require 'elasticsearch'

require 'elasticsearch/model/version'

require 'elasticsearch/model/hash_wrapper'
require 'elasticsearch/model/client'

require 'elasticsearch/model/multimodel'

require 'elasticsearch/model/adapter'
require 'elasticsearch/model/adapters/default'
require 'elasticsearch/model/adapters/active_record'
require 'elasticsearch/model/adapters/mongoid'
require 'elasticsearch/model/adapters/multiple'

require 'elasticsearch/model/importing'
require 'elasticsearch/model/indexing'
require 'elasticsearch/model/naming'
require 'elasticsearch/model/serializing'
require 'elasticsearch/model/searching'
require 'elasticsearch/model/callbacks'

require 'elasticsearch/model/proxy'

require 'elasticsearch/model/response'
require 'elasticsearch/model/response/base'
require 'elasticsearch/model/response/result'
require 'elasticsearch/model/response/results'
require 'elasticsearch/model/response/records'
require 'elasticsearch/model/response/pagination'
require 'elasticsearch/model/response/aggregations'
require 'elasticsearch/model/response/suggestions'

require 'elasticsearch/model/ext/active_record'

case
when defined?(::Kaminari)
  Elasticsearch::Model::Response::Response.__send__ :include, Elasticsearch::Model::Response::Pagination::Kaminari
when defined?(::WillPaginate)
  Elasticsearch::Model::Response::Response.__send__ :include, Elasticsearch::Model::Response::Pagination::WillPaginate
end

module Elasticsearch

  # Elasticsearch integration for Ruby models
  # =========================================
  #
  # `Elasticsearch::Model` contains modules for integrating the Elasticsearch search and analytical engine
  # with ActiveModel-based classes, or models, for the Ruby programming language.
  #
  # It facilitates importing your data into an index, automatically updating it when a record changes,
  # searching the specific index, setting up the index mapping or the model JSON serialization.
  #
  # When the `Elasticsearch::Model` module is included in your class, it automatically extends it
  # with the functionality; see {Elasticsearch::Model.included}. Most methods are available via
  # the `__elasticsearch__` class and instance method proxies.
  #
  # It is possible to include/extend the model with the corresponding
  # modules directly, if that is desired:
  #
  #     MyModel.__send__ :extend,  Elasticsearch::Model::Client::ClassMethods
  #     MyModel.__send__ :include, Elasticsearch::Model::Client::InstanceMethods
  #     MyModel.__send__ :extend,  Elasticsearch::Model::Searching::ClassMethods
  #     # ...
  #
  module Model
    METHODS = [:search, :mapping, :mappings, :settings, :index_name, :document_type, :import]

    # Adds the `Elasticsearch::Model` functionality to the including class.
    #
    # * Creates the `__elasticsearch__` class and instance methods, pointing to the proxy object
    # * Includes the necessary modules in the proxy classes
    # * Sets up delegation for crucial methods such as `search`, etc.
    #
    # @example Include the module in the `Article` model definition
    #
    #     class Article < ActiveRecord::Base
    #       include Elasticsearch::Model
    #     end
    #
    # @example Inject the module into the `Article` model during run time
    #
    #     Article.__send__ :include, Elasticsearch::Model
    #
    #
    def self.included(base)
      base.class_eval do
        include Elasticsearch::Model::Proxy

        Elasticsearch::Model::Proxy::ClassMethodsProxy.class_eval do
          include Elasticsearch::Model::Client::ClassMethods
          include Elasticsearch::Model::Naming::ClassMethods
          include Elasticsearch::Model::Indexing::ClassMethods
          include Elasticsearch::Model::Searching::ClassMethods
        end

        Elasticsearch::Model::Proxy::InstanceMethodsProxy.class_eval do
          include Elasticsearch::Model::Client::InstanceMethods
          include Elasticsearch::Model::Naming::InstanceMethods
          include Elasticsearch::Model::Indexing::InstanceMethods
          include Elasticsearch::Model::Serializing::InstanceMethods
        end

        Elasticsearch::Model::Proxy::InstanceMethodsProxy.class_eval <<-CODE, __FILE__, __LINE__ + 1
          def as_indexed_json(options={})
            target.respond_to?(:as_indexed_json) ? target.__send__(:as_indexed_json, options) : super
          end
        CODE

        # Delegate important methods to the `__elasticsearch__` proxy, unless they are defined already
        #
        class << self
          METHODS.each do |method|
            delegate method, to: :__elasticsearch__ unless self.public_instance_methods.include?(method)
          end
        end

        # Mix the importing module into the proxy
        #
        self.__elasticsearch__.class_eval do
          include Elasticsearch::Model::Importing::ClassMethods
          include Adapter.from_class(base).importing_mixin
        end

        # Add to the registry if it's a class (and not an intermediate module)
        Registry.add(base) if base.is_a?(Class)
      end
    end

    module ClassMethods
      # Get the client common for all models
      #
      # @example Get the client
      #
      #     Elasticsearch::Model.client
      #     => #<Elasticsearch::Transport::Client:0x007f96a7d0d000 @transport=... >
      #
      def client
        @client ||= Elasticsearch::Client.new
      end

      # Set the client for all models
      #
      # @example Configure (set) the client for all models
      #
      #     Elasticsearch::Model.client = Elasticsearch::Client.new host: 'http://localhost:9200', tracer: true
      #     => #<Elasticsearch::Transport::Client:0x007f96a6dd0d80 @transport=... >
      #
      # @note You have to set the client before you call Elasticsearch methods on the model,
      #       or set it directly on the model; see {Elasticsearch::Model::Client::ClassMethods#client}
      #
      def client=(client)
        @client = client
      end

      # Search across multiple models
      #
      # By default, all models which include the `Elasticsearch::Model` module are searched
      #
      # @param query_or_payload [String,Hash,Object] The search request definition
      #                                              (string, JSON, Hash, or object responding to `to_hash`)
      # @param models [Array] The Array of Model objects to search
      # @param options [Hash] Optional parameters to be passed to the Elasticsearch client
      #
      # @return [Elasticsearch::Model::Response::Response]
      #
      # @example Search across specific models
      #
      #     Elasticsearch::Model.search('foo', [Author, Article])
      #
      # @example Search across all models which include the `Elasticsearch::Model` module
      #
      #     Elasticsearch::Model.search('foo')
      #
      def search(query_or_payload, models=[], options={})
        models = Multimodel.new(models)
        request = Searching::SearchRequest.new(models, query_or_payload, options)
        Response::Response.new(models, request)
      end

      # Check if inheritance is enabled
      #
      # @note Inheritance is disabled by default.
      #
      def inheritance_enabled
        @settings[:inheritance_enabled] ||= false
      end

      # Enable inheritance of index_name and document_type
      #
      # @example Enable inheritance
      #
      #     Elasticsearch::Model.inheritance_enabled = true
      #
      def inheritance_enabled=(inheritance_enabled)
        warn STI_DEPRECATION_WARNING if inheritance_enabled
        @settings[:inheritance_enabled] = inheritance_enabled
      end

      # Access the module settings
      #
      def settings
        @settings ||= {}
      end

      private

      STI_DEPRECATION_WARNING = "DEPRECATION WARNING: Support for Single Table Inheritance (STI) is deprecated " +
                                "and will be removed in version 7.0.0.\nPlease save different model documents in separate indices and refer " +
                                "to the Elasticsearch documentation for more information.".freeze
    end
    extend ClassMethods

    class NotImplemented < NoMethodError; end
  end
end
@@ -1,145 +0,0 @@
module Elasticsearch
  module Model

    # Contains an adapter which provides OxM-specific implementations for common behaviour:
    #
    # * {Adapter::Adapter#records_mixin Fetching records from the database}
    # * {Adapter::Adapter#callbacks_mixin Model callbacks for automatic index updates}
    # * {Adapter::Adapter#importing_mixin Efficient bulk loading from the database}
    #
    # @see Elasticsearch::Model::Adapter::Default
    # @see Elasticsearch::Model::Adapter::ActiveRecord
    # @see Elasticsearch::Model::Adapter::Mongoid
    #
    module Adapter

      # Returns an adapter based on the Ruby class passed
      #
      # @example Create an adapter for an ActiveRecord-based model
      #
      #     class Article < ActiveRecord::Base; end
      #
      #     myadapter = Elasticsearch::Model::Adapter.from_class(Article)
      #     myadapter.adapter
      #     # => Elasticsearch::Model::Adapter::ActiveRecord
      #
      # @see Adapter.adapters The list of included adapters
      # @see Adapter.register Register a custom adapter
      #
      def from_class(klass)
        Adapter.new(klass)
      end; module_function :from_class

      # Returns registered adapters
      #
      # @see ::Elasticsearch::Model::Adapter::Adapter.adapters
      #
      def adapters
        Adapter.adapters
      end; module_function :adapters

      # Registers an adapter
      #
      # @see ::Elasticsearch::Model::Adapter::Adapter.register
      #
      def register(name, condition)
        Adapter.register(name, condition)
      end; module_function :register

      # Contains an adapter for specific OxM or architecture.
      #
      class Adapter
        attr_reader :klass

        def initialize(klass)
          @klass = klass
        end

        # Registers an adapter for specific condition
        #
        # @param name [Module] The module containing the implemented interface
        # @param condition [Proc] An object with a `call` method which is evaluated in {.adapter}
        #
        # @example Register an adapter for DataMapper
        #
        #     module DataMapperAdapter
        #
        #       # Implement the interface for fetching records
        #       #
        #       module Records
        #         def records
        #           klass.all(id: @ids)
        #         end
        #
        #         # ...
        #       end
        #     end
        #
        #     # Register the adapter
        #     #
        #     Elasticsearch::Model::Adapter.register(
        #       DataMapperAdapter,
        #       lambda { |klass|
        #         defined?(::DataMapper::Resource) and klass.ancestors.include?(::DataMapper::Resource)
        #       }
        #     )
        #
        def self.register(name, condition)
          self.adapters[name] = condition
        end

        # Return the collection of registered adapters
        #
        # @example Return the currently registered adapters
        #
        #     Elasticsearch::Model::Adapter.adapters
        #     # => {
        #     #  Elasticsearch::Model::Adapter::ActiveRecord => #<Proc:0x007...(lambda)>,
        #     #  Elasticsearch::Model::Adapter::Mongoid => #<Proc:0x007... (lambda)>,
        #     # }
        #
        # @return [Hash] The collection of adapters
        #
        def self.adapters
          @adapters ||= {}
        end

        # Return the module with {Default::Records} interface implementation
        #
        # @api private
        #
        def records_mixin
          adapter.const_get(:Records)
        end

        # Return the module with {Default::Callbacks} interface implementation
        #
        # @api private
        #
        def callbacks_mixin
          adapter.const_get(:Callbacks)
        end

        # Return the module with {Default::Importing} interface implementation
        #
        # @api private
        #
        def importing_mixin
          adapter.const_get(:Importing)
        end

        # Returns the adapter module
        #
        # @api private
        #
        def adapter
          @adapter ||= begin
            self.class.adapters.find( lambda {[]} ) { |name, condition| condition.call(klass) }.first \
            || Elasticsearch::Model::Adapter::Default
          end
        end

      end
    end
  end
end
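The `Adapter` module above dispatches on a registry of `{ module => condition }` pairs, picking the first module whose condition lambda matches the model class. A minimal, self-contained sketch of that pattern (the `ToyRegistry` class and the `FancyAdapter`/`Fancy` names are hypothetical, for illustration only):

```ruby
# A toy version of the condition-based adapter registry: adapters register
# a predicate lambda, and lookup returns the first adapter whose predicate
# accepts the class, falling back to a default.
class ToyRegistry
  def self.adapters
    @adapters ||= {}
  end

  def self.register(mod, condition)
    adapters[mod] = condition
  end

  def self.for(klass)
    found = adapters.find { |_mod, condition| condition.call(klass) }
    found ? found.first : :default
  end
end

module FancyAdapter; end
Fancy = Class.new

# Register the adapter with a predicate, like Adapter.register above
ToyRegistry.register(FancyAdapter, lambda { |klass| klass == Fancy })

ToyRegistry.for(Fancy)   # => FancyAdapter
ToyRegistry.for(String)  # => :default
```

The real implementation additionally memoizes the lookup (`@adapter ||= ...`) and resolves `Records`/`Callbacks`/`Importing` sub-modules from the matched adapter with `const_get`.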
@@ -1,101 +0,0 @@
module Elasticsearch
  module Model
    module Adapter

      # An adapter for ActiveRecord-based models
      #
      module ActiveRecord

        Adapter.register self,
                         lambda { |klass| !!defined?(::ActiveRecord::Base) && klass.respond_to?(:ancestors) && klass.ancestors.include?(::ActiveRecord::Base) }

        module Records
          attr_writer :options

          def options
            @options ||= {}
          end

          # Returns an `ActiveRecord::Relation` instance
          #
          def records
            sql_records = klass.where(klass.primary_key => ids)
            sql_records = sql_records.includes(self.options[:includes]) if self.options[:includes]

            # Re-order records based on the order from Elasticsearch hits
            # by redefining `to_a`, unless the user has called `order()`
            #
            sql_records.instance_exec(response.response['hits']['hits']) do |hits|
              ar_records_method_name = :to_a
              ar_records_method_name = :records if defined?(::ActiveRecord) && ::ActiveRecord::VERSION::MAJOR >= 5

              define_singleton_method(ar_records_method_name) do
                if defined?(::ActiveRecord) && ::ActiveRecord::VERSION::MAJOR >= 4
                  self.load
                else
                  self.__send__(:exec_queries)
                end
                if !self.order_values.present?
                  @records.sort_by { |record| hits.index { |hit| hit['_id'].to_s == record.id.to_s } }
                else
                  @records
                end
              end if self
            end

            sql_records
          end

          # Prevent clash with `ActiveSupport::Dependencies::Loadable`
          #
          def load
            records.__send__(:load)
          end
        end

        module Callbacks

          # Handle index updates (creating, updating or deleting documents)
          # when the model changes, by hooking into the lifecycle
          #
          # @see http://guides.rubyonrails.org/active_record_callbacks.html
          #
          def self.included(base)
            base.class_eval do
              after_commit lambda { __elasticsearch__.index_document  }, on: :create
              after_commit lambda { __elasticsearch__.update_document }, on: :update
              after_commit lambda { __elasticsearch__.delete_document }, on: :destroy
            end
          end
        end

        module Importing

          # Fetch batches of records from the database (used by the import method)
          #
          # @see http://api.rubyonrails.org/classes/ActiveRecord/Batches.html ActiveRecord::Batches.find_in_batches
          #
          def __find_in_batches(options={}, &block)
            query = options.delete(:query)
            named_scope = options.delete(:scope)
            preprocess = options.delete(:preprocess)

            scope = self
            scope = scope.__send__(named_scope) if named_scope
            scope = scope.instance_exec(&query) if query

            scope.find_in_batches(options) do |batch|
              batch = self.__send__(preprocess, batch) if preprocess
              yield(batch) if batch.present?
            end
          end

          def __transform
            lambda { |model| { index: { _id: model.id, data: model.__elasticsearch__.as_indexed_json } } }
          end
        end
      end
    end
  end
end
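The core of the `records` method above is the re-ordering step: the database returns rows in arbitrary order, so records are sorted by the position of their id within the Elasticsearch hits. The trick in isolation, with made-up data (the `Record` struct is illustrative only):

```ruby
# Given hits in Elasticsearch relevance order and records fetched from the
# database in id order, sort the records to match the hit order by looking
# up each record's id among the hits.
hits = [{ '_id' => '3' }, { '_id' => '1' }, { '_id' => '2' }]

Record = Struct.new(:id)
records = [Record.new(1), Record.new(2), Record.new(3)]

# Same expression the adapter installs into the relation's to_a/records
ordered = records.sort_by { |record| hits.index { |hit| hit['_id'] == record.id.to_s } }

ordered.map(&:id)  # => [3, 1, 2]
```

Note the string comparison (`record.id.to_s`): Elasticsearch `_id` values are strings, while database primary keys are usually integers.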
@@ -1,50 +0,0 @@
module Elasticsearch
  module Model
    module Adapter

      # The default adapter for models which haven't one registered
      #
      module Default

        # Module for implementing methods and logic related to fetching records from the database
        #
        module Records

          # Return the collection of records fetched from the database
          #
          # By default uses `MyModel#find[1, 2, 3]`
          #
          def records
            klass.find(@ids)
          end
        end

        # Module for implementing methods and logic related to hooking into model lifecycle
        # (e.g. to perform automatic index updates)
        #
        # @see http://api.rubyonrails.org/classes/ActiveModel/Callbacks.html
        module Callbacks
          # noop
        end

        # Module for efficiently fetching records from the database to import them into the index
        #
        module Importing

          # @abstract Implement this method in your adapter
          #
          def __find_in_batches(options={}, &block)
            raise NotImplemented, "Method not implemented for default adapter"
          end

          # @abstract Implement this method in your adapter
          #
          def __transform
            raise NotImplemented, "Method not implemented for default adapter"
          end
        end

      end
    end
  end
end
@@ -1,89 +0,0 @@
module Elasticsearch
  module Model
    module Adapter

      # An adapter for Mongoid-based models
      #
      # @see http://mongoid.org
      #
      module Mongoid

        Adapter.register self,
                         lambda { |klass| !!defined?(::Mongoid::Document) && klass.respond_to?(:ancestors) && klass.ancestors.include?(::Mongoid::Document) }

        module Records

          # Return a `Mongoid::Criteria` instance
          #
          def records
            criteria = klass.where(:id.in => ids)

            criteria.instance_exec(response.response['hits']['hits']) do |hits|
              define_singleton_method :to_a do
                self.entries.sort_by { |e| hits.index { |hit| hit['_id'].to_s == e.id.to_s } }
              end
            end

            criteria
          end

          # Intercept call to sorting methods, so we can ignore the order from Elasticsearch
          #
          %w| asc desc order_by |.each do |name|
            define_method name do |*args|
              criteria = records.__send__ name, *args
              criteria.instance_exec do
                define_singleton_method(:to_a) { self.entries }
              end

              criteria
            end
          end
        end

        module Callbacks

          # Handle index updates (creating, updating or deleting documents)
          # when the model changes, by hooking into the lifecycle
          #
          # @see http://mongoid.org/en/mongoid/docs/callbacks.html
          #
          def self.included(base)
            base.after_create  { |document| document.__elasticsearch__.index_document  }
            base.after_update  { |document| document.__elasticsearch__.update_document }
            base.after_destroy { |document| document.__elasticsearch__.delete_document }
          end
        end

        module Importing

          # Fetch batches of records from the database
          #
          # @see https://github.com/mongoid/mongoid/issues/1334
          # @see https://github.com/karmi/retire/pull/724
          #
          def __find_in_batches(options={}, &block)
            batch_size = options[:batch_size] || 1_000
            query = options[:query]
            named_scope = options[:scope]
            preprocess = options[:preprocess]

            scope = all
            scope = scope.send(named_scope) if named_scope
            scope = query.is_a?(Proc) ? scope.class_exec(&query) : scope.where(query) if query

            scope.no_timeout.each_slice(batch_size) do |items|
              yield (preprocess ? self.__send__(preprocess, items) : items)
            end
          end

          def __transform
            lambda { |a| { index: { _id: a.id.to_s, data: a.as_indexed_json } } }
          end
        end

      end

    end
  end
end
@@ -1,112 +0,0 @@
module Elasticsearch
  module Model
    module Adapter

      # An adapter to be used for deserializing results from multiple models,
      # retrieved through `Elasticsearch::Model.search`
      #
      # @see Elasticsearch::Model.search
      #
      module Multiple
        Adapter.register self, lambda { |klass| klass.is_a? Multimodel }

        module Records
          # Returns a collection of model instances, possibly of different classes (ActiveRecord, Mongoid, ...)
          #
          # @note The order of results in the Elasticsearch response is preserved
          #
          def records
            records_by_type = __records_by_type

            records = response.response["hits"]["hits"].map do |hit|
              records_by_type[ __type_for_hit(hit) ][ hit[:_id] ]
            end

            records.compact
          end

          # Returns the collection of records grouped by class based on `_type`
          #
          # Example:
          #
          #     {
          #       Foo => {"1"=> #<Foo id: 1, title: "ABC"}, ...},
          #       Bar => {"1"=> #<Bar id: 1, name: "XYZ"}, ...}
          #     }
          #
          # @api private
          #
          def __records_by_type
            result = __ids_by_type.map do |klass, ids|
              records = __records_for_klass(klass, ids)
              ids = records.map(&:id).map(&:to_s)
              [ klass, Hash[ids.zip(records)] ]
            end

            Hash[result]
          end

          # Returns the collection of records for a specific type based on passed `klass`
          #
          # @api private
          #
          def __records_for_klass(klass, ids)
            adapter = __adapter_for_klass(klass)

            case
            when Elasticsearch::Model::Adapter::ActiveRecord.equal?(adapter)
              klass.where(klass.primary_key => ids)
            when Elasticsearch::Model::Adapter::Mongoid.equal?(adapter)
              klass.where(:id.in => ids)
            else
              klass.find(ids)
            end
          end

          # Returns the record IDs grouped by class based on type `_type`
          #
          # Example:
          #
          #     { Foo => ["1"], Bar => ["1", "5"] }
          #
          # @api private
          #
          def __ids_by_type
            ids_by_type = {}

            response.response["hits"]["hits"].each do |hit|
              type = __type_for_hit(hit)
              ids_by_type[type] ||= []
              ids_by_type[type] << hit[:_id]
            end
            ids_by_type
          end

          # Returns the class of the model corresponding to a specific `hit` in Elasticsearch results
          #
          # @see Elasticsearch::Model::Registry
          #
          # @api private
          #
          def __type_for_hit(hit)
            @@__types ||= {}

            @@__types[ "#{hit[:_index]}::#{hit[:_type]}" ] ||= begin
              Registry.all.detect do |model|
                model.index_name == hit[:_index] && model.document_type == hit[:_type]
              end
            end
          end

          # Returns the adapter registered for a particular `klass` or `nil` if not available
          #
          # @api private
          #
          def __adapter_for_klass(klass)
            Adapter.adapters.select { |name, checker| checker.call(klass) }.keys.first
          end
        end
      end
    end
  end
end
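The multi-model adapter's first step, `__ids_by_type`, is plain hash bookkeeping: walk the hits once and collect ids under a per-type key, so each model class can be loaded with a single query. A self-contained sketch with made-up hit data (the index/type names are illustrative, and a simple `[index, type]` pair stands in for the resolved model class):

```ruby
# Group hit ids by their type key, as __ids_by_type does; the real code
# resolves each hit to a model class via the Registry instead of a pair.
hits = [
  { _index: 'articles', _type: 'article', _id: '1' },
  { _index: 'users',    _type: 'user',    _id: '5' },
  { _index: 'articles', _type: 'article', _id: '2' }
]

ids_by_type = {}
hits.each do |hit|
  type = [hit[:_index], hit[:_type]]
  (ids_by_type[type] ||= []) << hit[:_id]
end

ids_by_type
# => { ["articles", "article"] => ["1", "2"], ["users", "user"] => ["5"] }
```

`__records_by_type` then inverts this into `{ type => { id => record } }`, which lets `records` walk the hits a second time and emit model instances in the original Elasticsearch order.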
@@ -1,35 +0,0 @@
module Elasticsearch
  module Model

    # Allows to automatically update index based on model changes,
    # by hooking into the model lifecycle.
    #
    # @note A blocking HTTP request is done during the update process.
    #       If you need a more performant/resilient way of updating the index,
    #       consider adapting the callbacks behaviour, and use a background
    #       processing solution such as [Sidekiq](http://sidekiq.org)
    #       or [Resque](https://github.com/resque/resque).
    #
    module Callbacks

      # When included in a model, automatically injects the callback subscribers (`after_save`, etc)
      #
      # @example Automatically update Elasticsearch index when the model changes
      #
      #     class Article
      #       include Elasticsearch::Model
      #       include Elasticsearch::Model::Callbacks
      #     end
      #
      #     Article.first.update_attribute :title, 'Updated'
      #     # SQL (0.3ms)  UPDATE "articles" SET "title" = ...
      #     # 2013-11-20 15:08:52 +0100: POST http://localhost:9200/articles/article/1/_update ...
      #
      def self.included(base)
        adapter = Adapter.from_class(base)
        base.__send__ :include, adapter.callbacks_mixin
      end

    end
  end
end
@@ -1,61 +0,0 @@
module Elasticsearch
  module Model

    # Contains an `Elasticsearch::Client` instance
    #
    module Client

      module ClassMethods

        # Get the client for a specific model class
        #
        # @example Get the client for `Article` and perform API request
        #
        #     Article.client.cluster.health
        #     # => { "cluster_name" => "elasticsearch" ... }
        #
        def client client=nil
          @client ||= Elasticsearch::Model.client
        end

        # Set the client for a specific model class
        #
        # @example Configure the client for the `Article` model
        #
        #     Article.client = Elasticsearch::Client.new host: 'http://api.server:8080'
        #     Article.search ...
        #
        def client=(client)
          @client = client
        end
      end

      module InstanceMethods

        # Get or set the client for a specific model instance
        #
        # @example Get the client for a specific record and perform API request
        #
        #     @article = Article.first
        #     @article.client.info
        #     # => { "name" => "Node-1", ... }
        #
        def client
          @client ||= self.class.client
        end

        # Set the client for a specific model instance
        #
        # @example Set the client for a specific record
        #
        #     @article = Article.first
        #     @article.client = Elasticsearch::Client.new host: 'http://api.server:8080'
        #
        def client=(client)
          @client = client
        end
      end

    end
  end
end
@@ -1,14 +0,0 @@
# Prevent `MyModel.inspect` failing with `ActiveRecord::ConnectionNotEstablished`
# (triggered by elasticsearch-model/lib/elasticsearch/model.rb:79:in `included')
#
ActiveRecord::Base.instance_eval do
  class << self
    def inspect_with_rescue
      inspect_without_rescue
    rescue ActiveRecord::ConnectionNotEstablished
      "#{self}(no database connection)"
    end

    alias_method_chain :inspect, :rescue
  end
end if defined?(ActiveRecord) && ActiveRecord::VERSION::STRING < '4'
@@ -1,15 +0,0 @@
module Elasticsearch
  module Model

    # Subclass of `Hashie::Mash` to wrap Hash-like structures
    # (responses from Elasticsearch, search definitions, etc)
    #
    # The primary goal of the subclass is to disable the
    # warning being printed by Hashie for re-defined
    # methods, such as `sort`.
    #
    class HashWrapper < ::Hashie::Mash
      disable_warnings if respond_to?(:disable_warnings)
    end
  end
end
@@ -1,151 +0,0 @@
module Elasticsearch
  module Model

    # Provides support for easily and efficiently importing large amounts of
    # records from the including class into the index.
    #
    # @see ClassMethods#import
    #
    module Importing

      # When included in a model, adds the importing methods.
      #
      # @example Import all records from the `Article` model
      #
      #     Article.import
      #
      # @see #import
      #
      def self.included(base)
        base.__send__ :extend, ClassMethods

        adapter = Adapter.from_class(base)
        base.__send__ :include, adapter.importing_mixin
        base.__send__ :extend,  adapter.importing_mixin
      end

      module ClassMethods

        # Import all model records into the index
        #
        # The method will pick up correct strategy based on the `Importing` module
        # defined in the corresponding adapter.
        #
        # @param options [Hash] Options passed to the underlying `__find_in_batches` method
        # @param block [Proc] Optional block to evaluate for each batch
        #
        # @yield [Hash] Gives the Hash with the Elasticsearch response to the block
        #
        # @return [Fixnum] Number of errors encountered during importing
        #
        # @example Import all records into the index
        #
        #     Article.import
        #
        # @example Set the batch size to 100
        #
        #     Article.import batch_size: 100
        #
        # @example Process the response from Elasticsearch
        #
        #     Article.import do |response|
        #       puts "Got " + response['items'].select { |i| i['index']['error'] }.size.to_s + " errors"
        #     end
        #
        # @example Delete and create the index with appropriate settings and mappings
        #
        #     Article.import force: true
        #
        # @example Refresh the index after importing all batches
        #
        #     Article.import refresh: true
        #
        # @example Import the records into a different index/type than the default one
        #
        #     Article.import index: 'my-new-index', type: 'my-other-type'
        #
        # @example Pass an ActiveRecord scope to limit the imported records
        #
        #     Article.import scope: 'published'
        #
        # @example Pass an ActiveRecord query to limit the imported records
        #
        #     Article.import query: -> { where(author_id: author_id) }
        #
        # @example Transform records during the import with a lambda
        #
        #     transform = lambda do |a|
        #       {index: {_id: a.id, _parent: a.author_id, data: a.__elasticsearch__.as_indexed_json}}
        #     end
        #
        #     Article.import transform: transform
        #
        # @example Update the batch before yielding it
        #
        #     class Article
        #       # ...
        #       def self.enrich(batch)
        #         batch.each do |item|
        #           item.metadata = MyAPI.get_metadata(item.id)
        #         end
        #         batch
        #       end
        #     end
        #
        #     Article.import preprocess: :enrich
        #
        # @example Return an array of error elements instead of the number of errors, e.g.
        #          to try importing these records again
        #
        #     Article.import return: 'errors'
        #
        def import(options={}, &block)
          errors = []
          refresh = options.delete(:refresh) || false
          target_index = options.delete(:index) || index_name
          target_type = options.delete(:type) || document_type
          transform = options.delete(:transform) || __transform
          return_value = options.delete(:return) || 'count'

          unless transform.respond_to?(:call)
            raise ArgumentError,
                  "Pass an object responding to `call` as the :transform option, #{transform.class} given"
          end

          if options.delete(:force)
            self.create_index! force: true, index: target_index
          elsif !self.index_exists? index: target_index
            raise ArgumentError,
                  "#{target_index} does not exist to be imported into. Use create_index! or the :force option to create it."
          end

          __find_in_batches(options) do |batch|
            response = client.bulk \
              index: target_index,
              type: target_type,
              body: __batch_to_bulk(batch, transform)

            yield response if block_given?

            errors += response['items'].select { |k, v| k.values.first['error'] }
          end

          self.refresh_index! index: target_index if refresh

          case return_value
          when 'errors'
            errors
          else
            errors.size
          end
        end

        def __batch_to_bulk(batch, transform)
          batch.map { |model| transform.call(model) }
        end
      end

    end

  end
end
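`__batch_to_bulk` is the glue between batching and the Bulk API: it maps each record through the transform lambda to produce the array of action hashes the client sends as the bulk body. A self-contained sketch with plain hashes standing in for model instances (the data is made up; the real `__transform` calls `model.__elasticsearch__.as_indexed_json`):

```ruby
# The default transform emits one `index` action per record; batch.map
# over it is exactly what __batch_to_bulk does.
transform = lambda { |model| { index: { _id: model[:id], data: model } } }

batch = [
  { id: 1, title: 'One' },
  { id: 2, title: 'Two' }
]

bulk_body = batch.map { |model| transform.call(model) }
# => [{ index: { _id: 1, data: { id: 1, title: 'One' } } },
#     { index: { _id: 2, data: { id: 2, title: 'Two' } } }]
```

Because the transform is an option, callers can swap in a lambda that adds routing or parent fields (as the `@example Transform records during the import` doc above shows) without touching the batching logic.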
@@ -1,446 +0,0 @@
module Elasticsearch
  module Model

    # Provides the necessary support to set up index options (mappings, settings)
    # as well as instance methods to create, update or delete documents in the index.
    #
    # @see ClassMethods#settings
    # @see ClassMethods#mapping
    #
    # @see InstanceMethods#index_document
    # @see InstanceMethods#update_document
    # @see InstanceMethods#delete_document
    #
    module Indexing

      # Wraps the [index settings](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#configuration-index-settings)
      #
      class Settings
        attr_accessor :settings

        def initialize(settings={})
          @settings = settings
        end

        def to_hash
          @settings
        end

        def as_json(options={})
          to_hash
        end
      end

      # Wraps the [index mappings](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping.html)
      #
      class Mappings
        attr_accessor :options, :type

        # @private
        TYPES_WITH_EMBEDDED_PROPERTIES = %w(object nested)

        def initialize(type, options={})
          raise ArgumentError, "`type` is missing" if type.nil?

          @type = type
          @options = options
          @mapping = {}
        end

        def indexes(name, options={}, &block)
          @mapping[name] = options

          if block_given?
            @mapping[name][:type] ||= 'object'
            properties = TYPES_WITH_EMBEDDED_PROPERTIES.include?(@mapping[name][:type].to_s) ? :properties : :fields

            @mapping[name][properties] ||= {}

            previous = @mapping
            begin
              @mapping = @mapping[name][properties]
              self.instance_eval(&block)
            ensure
              @mapping = previous
            end
          end

          # Set the type to `text` by default
          @mapping[name][:type] ||= 'text'

          self
        end

        def to_hash
          { @type.to_sym => @options.merge( properties: @mapping ) }
        end

        def as_json(options={})
          to_hash
        end
      end

      module ClassMethods

        # Defines mappings for the index
        #
        # @example Define mapping for model
        #
        #     class Article
        #       mapping dynamic: 'strict' do
        #         indexes :foo do
        #           indexes :bar
        #         end
        #         indexes :baz
        #       end
        #     end
        #
        #     Article.mapping.to_hash
        #
        #     # => { :article =>
        #     #      { :dynamic => "strict",
        #     #        :properties=>
        #     #         { :foo => {
        #     #             :type=>"object",
        #     #             :properties => {
        #     #               :bar => { :type => "string" }
        #     #             }
        #     #           }
        #     #         },
        #     #        :baz => { :type=> "string" }
        #     #      }
        #     #    }
        #
        # @example Define index settings and mappings
        #
        #     class Article
        #       settings number_of_shards: 1 do
        #         mappings do
        #           indexes :foo
        #         end
        #       end
        #     end
        #
        # @example Call the mapping method directly
        #
        #     Article.mapping(dynamic: 'strict') { indexes :foo, type: 'long' }
        #
        #     Article.mapping.to_hash
        #
        #     # => {:article=>{:dynamic=>"strict", :properties=>{:foo=>{:type=>"long"}}}}
        #
        # The `mappings` and `settings` methods are accessible directly on the model class,
        # when it doesn't already define them. Use the `__elasticsearch__` proxy otherwise.
        #
        def mapping(options={}, &block)
          @mapping ||= Mappings.new(document_type, options)

          @mapping.options.update(options) unless options.empty?

          if block_given?
            @mapping.instance_eval(&block)
            return self
          else
            @mapping
          end
        end; alias_method :mappings, :mapping

        # Define settings for the index
        #
        # @example Define index settings
        #
        #     Article.settings(index: { number_of_shards: 1 })
        #
        #     Article.settings.to_hash
        #
        #     # => {:index=>{:number_of_shards=>1}}
        #
        # You can read settings from any object that responds to :read
        # as long as its return value can be parsed as either YAML or JSON.
        #
        # @example Define index settings from YAML file
        #
        #     # config/elasticsearch/articles.yml:
        #     #
        #     # index:
        #     #   number_of_shards: 1
        #     #
        #
        #     Article.settings File.open("config/elasticsearch/articles.yml")
        #
        #     Article.settings.to_hash
        #
        #     # => { "index" => { "number_of_shards" => 1 } }
        #
        #
        # @example Define index settings from JSON file
        #
        #     # config/elasticsearch/articles.json:
        #     #
        #     # { "index": { "number_of_shards": 1 } }
        #     #
        #
        #     Article.settings File.open("config/elasticsearch/articles.json")
        #
        #     Article.settings.to_hash
        #
        #     # => { "index" => { "number_of_shards" => 1 } }
        #
        def settings(settings={}, &block)
          settings = YAML.load(settings.read) if settings.respond_to?(:read)
          @settings ||= Settings.new(settings)

          @settings.settings.update(settings) unless settings.empty?

          if block_given?
            self.instance_eval(&block)
            return self
          else
            @settings
          end
        end

        def load_settings_from_io(settings)
          YAML.load(settings.read)
        end

        # Creates an index with correct name, automatically passing
        # `settings` and `mappings` defined in the model
        #
        # @example Create an index for the `Article` model
        #
        #     Article.__elasticsearch__.create_index!
        #
        # @example Forcefully create (delete first) an index for the `Article` model
        #
        #     Article.__elasticsearch__.create_index! force: true
        #
        # @example Pass a specific index name
        #
        #     Article.__elasticsearch__.create_index! index: 'my-index'
        #
        def create_index!(options={})
          options = options.clone

          target_index = options.delete(:index)    || self.index_name
          settings     = options.delete(:settings) || self.settings.to_hash
          mappings     = options.delete(:mappings) || self.mappings.to_hash

          delete_index!(options.merge index: target_index) if options[:force]

          unless index_exists?(index: target_index)
            self.client.indices.create index: target_index,
                                       body: {
                                         settings: settings,
                                         mappings: mappings }
          end
        end

        # Returns true if the index exists
        #
        # @example Check whether the model's index exists
        #
        #     Article.__elasticsearch__.index_exists?
        #
        # @example Check whether a specific index exists
        #
        #     Article.__elasticsearch__.index_exists? index: 'my-index'
        #
        def index_exists?(options={})
          target_index = options[:index] || self.index_name

          self.client.indices.exists(index: target_index) rescue false
        end

        # Deletes the index with corresponding name
        #
        # @example Delete the index for the `Article` model
        #
        #     Article.__elasticsearch__.delete_index!
        #
        # @example Pass a specific index name
        #
        #     Article.__elasticsearch__.delete_index! index: 'my-index'
        #
        def delete_index!(options={})
          target_index = options.delete(:index) || self.index_name

          begin
            self.client.indices.delete index: target_index
          rescue Exception => e
            if e.class.to_s =~ /NotFound/ && options[:force]
              client.transport.logger.debug("[!!!] Index does not exist (#{e.class})") if client.transport.logger
              nil
            else
              raise e
            end
          end
        end

        # Performs the "refresh" operation for the index (useful e.g. in tests)
        #
        # @example Refresh the index for the `Article` model
        #
        #     Article.__elasticsearch__.refresh_index!
        #
        # @example Pass a specific index name
        #
        #     Article.__elasticsearch__.refresh_index! index: 'my-index'
        #
        # @see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-refresh.html
        #
        def refresh_index!(options={})
          target_index = options.delete(:index) || self.index_name

          begin
            self.client.indices.refresh index: target_index
          rescue Exception => e
            if e.class.to_s =~ /NotFound/ && options[:force]
              client.transport.logger.debug("[!!!] Index does not exist (#{e.class})") if client.transport.logger
              nil
            else
              raise e
            end
          end
        end
||||
end
|
||||
|
||||
module InstanceMethods
|
||||
|
||||
def self.included(base)
|
||||
# Register callback for storing changed attributes for models
|
||||
# which implement `before_save` and return changed attributes
|
||||
# (ie. when `Elasticsearch::Model` is included)
|
||||
#
|
||||
# @note This is typically triggered only when the module would be
|
||||
# included in the model directly, not within the proxy.
|
||||
#
|
||||
# @see #update_document
|
||||
#
|
||||
base.before_save do |i|
|
||||
if i.class.instance_methods.include?(:changes_to_save) # Rails 5.1
|
||||
i.instance_variable_set(:@__changed_model_attributes,
|
||||
Hash[ i.changes_to_save.map { |key, value| [key, value.last] } ])
|
||||
elsif i.class.instance_methods.include?(:changes)
|
||||
i.instance_variable_set(:@__changed_model_attributes,
|
||||
Hash[ i.changes.map { |key, value| [key, value.last] } ])
|
||||
end
|
||||
end if base.respond_to?(:before_save)
|
||||
end
|
||||
|
||||
# Serializes the model instance into JSON (by calling `as_indexed_json`),
|
||||
# and saves the document into the Elasticsearch index.
|
||||
#
|
||||
# @param options [Hash] Optional arguments for passing to the client
|
||||
#
|
||||
# @example Index a record
|
||||
#
|
||||
# @article.__elasticsearch__.index_document
|
||||
# 2013-11-20 16:25:57 +0100: PUT http://localhost:9200/articles/article/1 ...
|
||||
#
|
||||
# @return [Hash] The response from Elasticsearch
|
||||
#
|
||||
# @see http://rubydoc.info/gems/elasticsearch-api/Elasticsearch/API/Actions:index
|
||||
#
|
||||
def index_document(options={})
|
||||
document = self.as_indexed_json
|
||||
|
||||
client.index(
|
||||
{ index: index_name,
|
||||
type: document_type,
|
||||
id: self.id,
|
||||
body: document }.merge(options)
|
||||
)
|
||||
end
|
||||
|
||||
# Deletes the model instance from the index
|
||||
#
|
||||
# @param options [Hash] Optional arguments for passing to the client
|
||||
#
|
||||
# @example Delete a record
|
||||
#
|
||||
# @article.__elasticsearch__.delete_document
|
||||
# 2013-11-20 16:27:00 +0100: DELETE http://localhost:9200/articles/article/1
|
||||
#
|
||||
# @return [Hash] The response from Elasticsearch
|
||||
#
|
||||
# @see http://rubydoc.info/gems/elasticsearch-api/Elasticsearch/API/Actions:delete
|
||||
#
|
||||
def delete_document(options={})
|
||||
client.delete(
|
||||
{ index: index_name,
|
||||
type: document_type,
|
||||
id: self.id }.merge(options)
|
||||
)
|
||||
end
|
||||
|
||||
# Tries to gather the changed attributes of a model instance
|
||||
# (via [ActiveModel::Dirty](http://api.rubyonrails.org/classes/ActiveModel/Dirty.html)),
|
||||
# performing a _partial_ update of the document.
|
||||
#
|
||||
# When the changed attributes are not available, performs full re-index of the record.
|
||||
#
|
||||
# See the {#update_document_attributes} method for updating specific attributes directly.
|
||||
#
|
||||
# @param options [Hash] Optional arguments for passing to the client
|
||||
#
|
||||
# @example Update a document corresponding to the record
|
||||
#
|
||||
# @article = Article.first
|
||||
# @article.update_attribute :title, 'Updated'
|
||||
# # SQL (0.3ms) UPDATE "articles" SET "title" = ?...
|
||||
#
|
||||
# @article.__elasticsearch__.update_document
|
||||
# # 2013-11-20 17:00:05 +0100: POST http://localhost:9200/articles/article/1/_update ...
|
||||
# # 2013-11-20 17:00:05 +0100: > {"doc":{"title":"Updated"}}
|
||||
#
|
||||
# @return [Hash] The response from Elasticsearch
|
||||
#
|
||||
# @see http://rubydoc.info/gems/elasticsearch-api/Elasticsearch/API/Actions:update
|
||||
#
|
||||
def update_document(options={})
|
||||
if attributes_in_database = self.instance_variable_get(:@__changed_model_attributes).presence
|
||||
attributes = if respond_to?(:as_indexed_json)
|
||||
self.as_indexed_json.select { |k,v| attributes_in_database.keys.map(&:to_s).include? k.to_s }
|
||||
else
|
||||
attributes_in_database
|
||||
end
|
||||
|
||||
client.update(
|
||||
{ index: index_name,
|
||||
type: document_type,
|
||||
id: self.id,
|
||||
body: { doc: attributes } }.merge(options)
|
||||
) unless attributes.empty?
|
||||
else
|
||||
index_document(options)
|
||||
end
|
||||
end
|
||||
|
||||
# Perform a _partial_ update of specific document attributes
|
||||
# (without consideration for changed attributes as in {#update_document})
|
||||
#
|
||||
# @param attributes [Hash] Attributes to be updated
|
||||
# @param options [Hash] Optional arguments for passing to the client
|
||||
#
|
||||
# @example Update the `title` attribute
|
||||
#
|
||||
# @article = Article.first
|
||||
# @article.title = "New title"
|
||||
# @article.__elasticsearch__.update_document_attributes title: "New title"
|
||||
#
|
||||
# @return [Hash] The response from Elasticsearch
|
||||
#
|
||||
def update_document_attributes(attributes, options={})
|
||||
client.update(
|
||||
{ index: index_name,
|
||||
type: document_type,
|
||||
id: self.id,
|
||||
body: { doc: attributes } }.merge(options)
|
||||
)
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
end
|
|
@@ -1,83 +0,0 @@
module Elasticsearch
  module Model

    # Keeps a global registry of classes that include `Elasticsearch::Model`
    #
    class Registry
      def initialize
        @models = []
      end

      # Returns the unique instance of the registry (Singleton)
      #
      # @api private
      #
      def self.__instance
        @instance ||= new
      end

      # Adds a model to the registry
      #
      def self.add(klass)
        __instance.add(klass)
      end

      # Returns an Array of registered models
      #
      def self.all
        __instance.models
      end

      # Adds a model to the registry
      #
      def add(klass)
        @models << klass
      end

      # Returns a copy of the registered models
      #
      def models
        @models.dup
      end
    end

    # Wraps a collection of models when querying multiple indices
    #
    # @see Elasticsearch::Model.search
    #
    class Multimodel
      attr_reader :models

      # @param models [Class] The list of models across which the search will be performed
      #
      def initialize(*models)
        @models = models.flatten
        @models = Model::Registry.all if @models.empty?
      end

      # Get an Array of index names used for retrieving documents when doing a search across multiple models
      #
      # @return [Array] the list of index names used for retrieving documents
      #
      def index_name
        models.map { |m| m.index_name }
      end

      # Get an Array of document types used for retrieving documents when doing a search across multiple models
      #
      # @return [Array] the list of document types used for retrieving documents
      #
      def document_type
        models.map { |m| m.document_type }
      end

      # Get the client common for all models
      #
      # @return Elasticsearch::Transport::Client
      #
      def client
        Elasticsearch::Model.client
      end
    end
  end
end
@@ -1,153 +0,0 @@
module Elasticsearch
  module Model

    # Provides methods for getting and setting index name and document type for the model
    #
    module Naming

      DEFAULT_DOC_TYPE = '_doc'.freeze

      module ClassMethods

        # Get or set the name of the index
        #
        # @example Set the index name for the `Article` model
        #
        #     class Article
        #       index_name "articles-#{Rails.env}"
        #     end
        #
        # @example Set the index name for the `Article` model and re-evaluate it on each call
        #
        #     class Article
        #       index_name { "articles-#{Time.now.year}" }
        #     end
        #
        # @example Directly set the index name for the `Article` model
        #
        #     Article.index_name "articles-#{Rails.env}"
        #
        def index_name name=nil, &block
          if name || block_given?
            return (@index_name = name || block)
          end

          if @index_name.respond_to?(:call)
            @index_name.call
          else
            @index_name || implicit(:index_name)
          end
        end

        # Set the index name
        #
        # @see index_name
        def index_name=(name)
          @index_name = name
        end

        # Get or set the document type
        #
        # @example Set the document type for the `Article` model
        #
        #     class Article
        #       document_type "my-article"
        #     end
        #
        # @example Directly set the document type for the `Article` model
        #
        #     Article.document_type "my-article"
        #
        def document_type name=nil
          @document_type = name || @document_type || implicit(:document_type)
        end

        # Set the document type
        #
        # @see document_type
        #
        def document_type=(name)
          @document_type = name
        end

        private

        def implicit(prop)
          value = nil

          if Elasticsearch::Model.settings[:inheritance_enabled]
            self.ancestors.each do |klass|
              # When Naming is included in Proxy::ClassMethods the actual model
              # is among its ancestors. We don't want to call the actual model
              # since it will result in the same call to the same instance of
              # Proxy::ClassMethods. To prevent this we also skip the ancestor
              # that is the target.
              next if klass == self || self.respond_to?(:target) && klass == self.target
              break if value = klass.respond_to?(prop) && klass.send(prop)
            end
          end

          value || self.send("default_#{prop}")
        end

        def default_index_name
          self.model_name.collection.gsub(/\//, '-')
        end

        def default_document_type
          DEFAULT_DOC_TYPE
        end

      end

      module InstanceMethods

        # Get or set the index name for the model instance
        #
        # @example Set the index name for an instance of the `Article` model
        #
        #     @article.index_name "articles-#{@article.user_id}"
        #     @article.__elasticsearch__.update_document
        #
        def index_name name=nil, &block
          if name || block_given?
            return (@index_name = name || block)
          end

          if @index_name.respond_to?(:call)
            @index_name.call
          else
            @index_name || self.class.index_name
          end
        end

        # Set the index name
        #
        # @see index_name
        def index_name=(name)
          @index_name = name
        end

        # @example Set the document type for an instance of the `Article` model
        #
        #     @article.document_type "my-article"
        #     @article.__elasticsearch__.update_document
        #
        def document_type name=nil
          @document_type = name || @document_type || self.class.document_type
        end

        # Set the document type
        #
        # @see document_type
        #
        def document_type=(name)
          @document_type = name
        end
      end

    end
  end
end
@@ -1,143 +0,0 @@
module Elasticsearch
  module Model

    # This module provides a proxy interfacing between the including class and
    # {Elasticsearch::Model}, preventing the pollution of the including class namespace.
    #
    # The only "gateway" between the model and Elasticsearch::Model is the
    # `__elasticsearch__` class and instance method.
    #
    # The including class must be compatible with
    # [ActiveModel](https://github.com/rails/rails/tree/master/activemodel).
    #
    # @example Include the {Elasticsearch::Model} module into an `Article` model
    #
    #     class Article < ActiveRecord::Base
    #       include Elasticsearch::Model
    #     end
    #
    #     Article.__elasticsearch__.respond_to?(:search)
    #     # => true
    #
    #     article = Article.first
    #
    #     article.respond_to? :index_document
    #     # => false
    #
    #     article.__elasticsearch__.respond_to?(:index_document)
    #     # => true
    #
    module Proxy

      # Define the `__elasticsearch__` class and instance methods in the including class
      # and register a callback for intercepting changes in the model.
      #
      # @note The callback is triggered only when `Elasticsearch::Model` is included in the
      #       module and the functionality is accessible via the proxy.
      #
      def self.included(base)
        base.class_eval do
          # {ClassMethodsProxy} instance, accessed as `MyModel.__elasticsearch__`
          #
          def self.__elasticsearch__ &block
            @__elasticsearch__ ||= ClassMethodsProxy.new(self)
            @__elasticsearch__.instance_eval(&block) if block_given?
            @__elasticsearch__
          end

          # {InstanceMethodsProxy}, accessed as `@mymodel.__elasticsearch__`
          #
          def __elasticsearch__ &block
            @__elasticsearch__ ||= InstanceMethodsProxy.new(self)
            @__elasticsearch__.instance_eval(&block) if block_given?
            @__elasticsearch__
          end

          # Register a callback for storing changed attributes for models which implement
          # the `before_save` method and return changed attributes (i.e. when `Elasticsearch::Model` is included)
          #
          # @see http://api.rubyonrails.org/classes/ActiveModel/Dirty.html
          #
          before_save do |i|
            if i.class.instance_methods.include?(:changes_to_save) # Rails 5.1
              a = i.__elasticsearch__.instance_variable_get(:@__changed_model_attributes) || {}
              i.__elasticsearch__.instance_variable_set(:@__changed_model_attributes,
                                                        a.merge(Hash[ i.changes_to_save.map { |key, value| [key, value.last] } ]))
            elsif i.class.instance_methods.include?(:changes)
              a = i.__elasticsearch__.instance_variable_get(:@__changed_model_attributes) || {}
              i.__elasticsearch__.instance_variable_set(:@__changed_model_attributes,
                                                        a.merge(Hash[ i.changes.map { |key, value| [key, value.last] } ]))
            end
          end if respond_to?(:before_save)
        end
      end

      # @overload dup
      #
      #   Returns a copy of this object. Resets the __elasticsearch__ proxy so
      #   the duplicate will build its own proxy.
      def initialize_dup(_)
        @__elasticsearch__ = nil
        super
      end

      # Common module for the proxy classes
      #
      module Base
        attr_reader :target

        def initialize(target)
          @target = target
        end

        # Delegate methods to `@target`
        #
        def method_missing(method_name, *arguments, &block)
          target.respond_to?(method_name) ? target.__send__(method_name, *arguments, &block) : super
        end

        # Respond to methods from `@target`
        #
        def respond_to?(method_name, include_private = false)
          target.respond_to?(method_name) || super
        end

        def inspect
          "[PROXY] #{target.inspect}"
        end
      end

      # A proxy interfacing between Elasticsearch::Model class methods and model class methods
      #
      # TODO: Inherit from BasicObject and make Pry's `ls` command behave?
      #
      class ClassMethodsProxy
        include Base
      end

      # A proxy interfacing between Elasticsearch::Model instance methods and model instance methods
      #
      # TODO: Inherit from BasicObject and make Pry's `ls` command behave?
      #
      class InstanceMethodsProxy
        include Base

        def klass
          target.class
        end

        def class
          klass.__elasticsearch__
        end

        # Need to redefine `as_json` because we're not inheriting from `BasicObject`;
        # see TODO note above.
        #
        def as_json(options={})
          target.as_json(options)
        end
      end

    end
  end
end
@@ -1,84 +0,0 @@
module Elasticsearch
  module Model

    # Contains modules and classes for wrapping the response from Elasticsearch
    #
    module Response

      # Encapsulates the response returned from the Elasticsearch client
      #
      # Implements Enumerable and forwards its methods to the {#results} object.
      #
      class Response
        attr_reader :klass, :search

        include Enumerable

        delegate :each, :empty?, :size, :slice, :[], :to_ary, to: :results

        def initialize(klass, search, options={})
          @klass  = klass
          @search = search
        end

        # Returns the Elasticsearch response
        #
        # @return [Hash]
        #
        def response
          @response ||= HashWrapper.new(search.execute!)
        end

        # Returns the collection of "hits" from Elasticsearch
        #
        # @return [Results]
        #
        def results
          @results ||= Results.new(klass, self)
        end

        # Returns the collection of records from the database
        #
        # @return [Records]
        #
        def records(options = {})
          @records ||= Records.new(klass, self, options)
        end

        # Returns the "took" time
        #
        def took
          raw_response['took']
        end

        # Returns whether the response timed out
        #
        def timed_out
          raw_response['timed_out']
        end

        # Returns the statistics on shards
        #
        def shards
          @shards ||= response['_shards']
        end

        # Returns a Hashie::Mash of the aggregations
        #
        def aggregations
          @aggregations ||= Aggregations.new(raw_response['aggregations'])
        end

        # Returns a Hashie::Mash of the suggestions
        #
        def suggestions
          @suggestions ||= Suggestions.new(raw_response['suggest'])
        end

        def raw_response
          @raw_response ||= @response ? @response.to_hash : search.execute!
        end
      end
    end
  end
end
@@ -1,38 +0,0 @@
module Elasticsearch
  module Model
    module Response

      class Aggregations < HashWrapper
        disable_warnings if respond_to?(:disable_warnings)

        def initialize(attributes={})
          __redefine_enumerable_methods super(attributes)
        end

        # Fix the problem of Hashie::Mash returning unexpected values for `min` and `max` methods
        #
        # People can define names for aggregations such as `min` and `max`, but these
        # methods are defined in `Enumerable#min` and `Enumerable#max`
        #
        #     { foo: 'bar' }.min
        #     # => [:foo, "bar"]
        #
        # Therefore, any Hashie::Mash instance value has the `min` and `max`
        # methods redefined to return the real value
        #
        def __redefine_enumerable_methods(h)
          if h.respond_to?(:each_pair)
            h.each_pair { |k, v| v = __redefine_enumerable_methods(v) }
          end
          if h.is_a?(Hashie::Mash)
            class << h
              define_method(:min) { self[:min] }
              define_method(:max) { self[:max] }
            end
          end
        end
      end

    end
  end
end
@@ -1,45 +0,0 @@
module Elasticsearch
  module Model
    module Response
      # Common functionality for classes in the {Elasticsearch::Model::Response} module
      #
      module Base
        attr_reader :klass, :response, :raw_response

        # @param klass    [Class] The name of the model class
        # @param response [Hash]  The full response returned from the Elasticsearch client
        # @param options  [Hash]  Optional parameters
        #
        def initialize(klass, response, options={})
          @klass        = klass
          @raw_response = response
          @response     = response
        end

        # @abstract Implement this method in specific class
        #
        def results
          raise NotImplemented, "Implement this method in #{klass}"
        end

        # @abstract Implement this method in specific class
        #
        def records
          raise NotImplemented, "Implement this method in #{klass}"
        end

        # Returns the total number of hits
        #
        def total
          response.response['hits']['total']
        end

        # Returns the max_score
        #
        def max_score
          response.response['hits']['max_score']
        end
      end
    end
  end
end
@@ -1,2 +0,0 @@
require 'elasticsearch/model/response/pagination/kaminari'
require 'elasticsearch/model/response/pagination/will_paginate'
@@ -1,109 +0,0 @@
module Elasticsearch
  module Model
    module Response

      # Pagination for search results/records
      #
      module Pagination
        # Allow models to be paginated with the "kaminari" gem [https://github.com/amatsuda/kaminari]
        #
        module Kaminari
          def self.included(base)
            # Include the Kaminari configuration and paging method in response
            #
            base.__send__ :include, ::Kaminari::ConfigurationMethods::ClassMethods
            base.__send__ :include, ::Kaminari::PageScopeMethods

            # Include the Kaminari paging methods in results and records
            #
            Elasticsearch::Model::Response::Results.__send__ :include, ::Kaminari::ConfigurationMethods::ClassMethods
            Elasticsearch::Model::Response::Results.__send__ :include, ::Kaminari::PageScopeMethods
            Elasticsearch::Model::Response::Records.__send__ :include, ::Kaminari::PageScopeMethods

            Elasticsearch::Model::Response::Results.__send__ :delegate, :limit_value, :offset_value, :total_count, :max_pages, to: :response
            Elasticsearch::Model::Response::Records.__send__ :delegate, :limit_value, :offset_value, :total_count, :max_pages, to: :response

            base.class_eval <<-RUBY, __FILE__, __LINE__ + 1
              # Define the `page` Kaminari method
              #
              def #{::Kaminari.config.page_method_name}(num=nil)
                @results  = nil
                @records  = nil
                @response = nil
                @page     = [num.to_i, 1].max
                @per_page ||= __default_per_page

                self.search.definition.update size: @per_page,
                                              from: @per_page * (@page - 1)

                self
              end
            RUBY
          end

          # Returns the current "limit" (`size`) value
          #
          def limit_value
            case
            when search.definition[:size]
              search.definition[:size]
            else
              __default_per_page
            end
          end

          # Returns the current "offset" (`from`) value
          #
          def offset_value
            case
            when search.definition[:from]
              search.definition[:from]
            else
              0
            end
          end

          # Set the "limit" (`size`) value
          #
          def limit(value)
            return self if value.to_i <= 0
            @results  = nil
            @records  = nil
            @response = nil
            @per_page = value.to_i

            search.definition.update :size => @per_page
            search.definition.update :from => @per_page * (@page - 1) if @page
            self
          end

          # Set the "offset" (`from`) value
          #
          def offset(value)
            return self if value.to_i < 0
            @results  = nil
            @records  = nil
            @response = nil
            @page     = nil
            search.definition.update :from => value.to_i
            self
          end

          # Returns the total number of results
          #
          def total_count
            results.total
          end

          # Returns the model's `per_page` value or the default
          #
          # @api private
          #
          def __default_per_page
            klass.respond_to?(:default_per_page) && klass.default_per_page || ::Kaminari.config.default_per_page
          end
        end
      end
    end
  end
end
@@ -1,95 +0,0 @@
module Elasticsearch
  module Model
    module Response

      # Pagination for search results/records
      #
      module Pagination

        # Allow models to be paginated with the "will_paginate" gem [https://github.com/mislav/will_paginate]
        #
        module WillPaginate
          def self.included(base)
            base.__send__ :include, ::WillPaginate::CollectionMethods

            # Include the paging methods in results and records
            #
            methods = [:current_page, :offset, :length, :per_page, :total_entries, :total_pages, :previous_page, :next_page, :out_of_bounds?]
            Elasticsearch::Model::Response::Results.__send__ :delegate, *methods, to: :response
            Elasticsearch::Model::Response::Records.__send__ :delegate, *methods, to: :response
          end

          def offset
            (current_page - 1) * per_page
          end

          def length
            search.definition[:size]
          end

          # Main pagination method
          #
          # @example
          #
          #     Article.search('foo').paginate(page: 1, per_page: 30)
          #
          def paginate(options)
            param_name = options[:param_name] || :page
            page       = [options[param_name].to_i, 1].max
            per_page   = (options[:per_page] || __default_per_page).to_i

            search.definition.update size: per_page,
                                     from: (page - 1) * per_page
            self
          end

          # Return the current page
          #
          def current_page
            search.definition[:from] / per_page + 1 if search.definition[:from] && per_page
          end

          # Pagination method
          #
          # @example
          #
          #     Article.search('foo').page(2)
          #
          def page(num)
            paginate(page: num, per_page: per_page) # shorthand
          end

          # Return or set the "size" value
          #
          # @example
          #
          #     Article.search('foo').per_page(15).page(2)
          #
          def per_page(num = nil)
            if num.nil?
              search.definition[:size]
            else
              paginate(page: current_page, per_page: num) # shorthand
            end
          end

          # Returns the total number of results
          #
          def total_entries
            results.total
          end

          # Returns the model's `per_page` value or the default
          #
          # @api private
          #
          def __default_per_page
            klass.respond_to?(:per_page) && klass.per_page || ::WillPaginate.per_page
          end
        end
      end

    end
  end
end
@@ -1,73 +0,0 @@
module Elasticsearch
  module Model
    module Response

      # Encapsulates the collection of records returned from the database
      #
      # Implements Enumerable and forwards its methods to the {#records} object,
      # which is provided by an {Elasticsearch::Model::Adapter::Adapter} implementation.
      #
      class Records
        include Enumerable

        delegate :each, :empty?, :size, :slice, :[], :to_a, :to_ary, to: :records

        attr_accessor :options

        include Base

        # @see Base#initialize
        #
        def initialize(klass, response, options={})
          super

          # Include the module provided by the adapter in the singleton class ("metaclass")
          #
          adapter = Adapter.from_class(klass)
          metaclass = class << self; self; end
          metaclass.__send__ :include, adapter.records_mixin

          self.options = options
          self
        end

        # Returns the hit IDs
        #
        def ids
          response.response['hits']['hits'].map { |hit| hit['_id'] }
        end

        # Returns the {Results} collection
        #
        def results
          response.results
        end

        # Yields [record, hit] pairs to the block
        #
        def each_with_hit(&block)
          records.to_a.zip(results).each(&block)
        end

        # Yields [record, hit] pairs and returns the result
        #
        def map_with_hit(&block)
          records.to_a.zip(results).map(&block)
        end

        # Delegate methods to `@records`
        #
        def method_missing(method_name, *arguments)
          records.respond_to?(method_name) ? records.__send__(method_name, *arguments) : super
        end

        # Respond to methods from `@records`
        #
        def respond_to?(method_name, include_private = false)
          records.respond_to?(method_name) || super
        end

      end
    end
  end
end
@ -1,63 +0,0 @@
|
|||
module Elasticsearch
|
||||
module Model
|
||||
module Response
|
||||
|
||||
# Encapsulates the "hit" returned from the Elasticsearch client
|
||||
#
|
||||
# Wraps the raw Hash with in a `Hashie::Mash` instance, providing
|
||||
# access to the Hash properties by calling Ruby methods.
|
||||
#
|
||||
# @see https://github.com/intridea/hashie
|
||||
#
|
||||
class Result
|
||||
|
||||
# @param attributes [Hash] A Hash with document properties
|
||||
#
|
||||
def initialize(attributes={})
|
||||
@result = HashWrapper.new(attributes)
|
||||
end
|
||||
|
||||
# Return document `_id` as `id`
|
||||
#
|
||||
def id
|
||||
@result['_id']
|
||||
end
|
||||
|
||||
# Return document `_type` as `_type`
|
||||
#
|
||||
def type
|
||||
@result['_type']
|
||||
end
|
||||
|
||||
# Delegate methods to `@result` or `@result._source`
|
||||
#
|
||||
def method_missing(name, *arguments)
|
||||
case
|
||||
when name.to_s.end_with?('?')
|
||||
@result.__send__(name, *arguments) || ( @result._source && @result._source.__send__(name, *arguments) )
|
||||
when @result.respond_to?(name)
|
||||
@result.__send__ name, *arguments
|
||||
when @result._source && @result._source.respond_to?(name)
|
||||
@result._source.__send__ name, *arguments
|
||||
else
|
||||
super
|
||||
end
|
||||
end
|
||||
|
||||
# Respond to methods from `@result` or `@result._source`
|
||||
#
|
||||
def respond_to_missing?(method_name, include_private = false)
|
||||
@result.respond_to?(method_name.to_sym) || \
|
||||
@result._source && @result._source.respond_to?(method_name.to_sym) || \
|
||||
super
|
||||
end
|
||||
|
||||
def as_json(options={})
|
||||
@result.as_json(options)
|
||||
end
|
||||
|
||||
# TODO: #to_s, #inspect, with support for Pry
|
||||
end
|
||||
end
|
||||
end
|
||||
end

@@ -1,31 +0,0 @@
module Elasticsearch
  module Model
    module Response

      # Encapsulates the collection of documents returned from Elasticsearch
      #
      # Implements Enumerable and forwards its methods to the {#results} object.
      #
      class Results
        include Base
        include Enumerable

        delegate :each, :empty?, :size, :slice, :[], :to_a, :to_ary, to: :results

        # @see Base#initialize
        #
        def initialize(klass, response, options={})
          super
        end

        # Returns the {Results} collection
        #
        def results
          # TODO: Configurable custom wrapper
          response.response['hits']['hits'].map { |hit| Result.new(hit) }
        end

      end
    end
  end
end
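The `results` method above maps each raw hit into a wrapper object. The shape of that mapping can be sketched without the gem; the `raw` hash and `SimpleResult` struct below are hypothetical stand-ins for the Elasticsearch response and the `Result` class:

```ruby
# Hypothetical raw search response, shaped like the Elasticsearch JSON.
raw = {
  'hits' => {
    'total' => 2,
    'hits' => [
      { '_id' => '1', '_source' => { 'title' => 'Test' } },
      { '_id' => '2', '_source' => { 'title' => 'Testing Coding' } }
    ]
  }
}

# Each hit becomes a wrapper; a plain Struct stands in for Result here.
SimpleResult = Struct.new(:id, :title)
results = raw['hits']['hits'].map { |h| SimpleResult.new(h['_id'], h['_source']['title']) }

puts results.map(&:title).inspect # => ["Test", "Testing Coding"]
```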

@@ -1,15 +0,0 @@
module Elasticsearch
  module Model
    module Response

      class Suggestions < HashWrapper
        disable_warnings if respond_to?(:disable_warnings)

        def terms
          self.to_a.map { |k, v| v.first['options'] }.flatten.map { |v| v['text'] }.uniq
        end
      end

    end
  end
end
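The `terms` transformation above flattens every suggester's first entry into a de-duplicated list of suggested texts. On a hypothetical `suggest` section of a response (the `title`/`body` suggesters below are illustrative, not from the gem):

```ruby
# Hypothetical `suggest` section of a search response.
suggestions = {
  'title' => [
    { 'text' => 'tezt', 'options' => [{ 'text' => 'test' }, { 'text' => 'text' }] }
  ],
  'body' => [
    { 'text' => 'tezt', 'options' => [{ 'text' => 'test' }] }
  ]
}

# Same transformation as Suggestions#terms: collect option texts, de-duplicated.
terms = suggestions.to_a.map { |_k, v| v.first['options'] }.flatten.map { |v| v['text'] }.uniq

puts terms.inspect # => ["test", "text"]
```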

@@ -1,109 +0,0 @@
module Elasticsearch
  module Model

    # Contains functionality related to searching.
    #
    module Searching

      # Wraps a search request definition
      #
      class SearchRequest
        attr_reader :klass, :definition, :options

        # @param klass [Class] The class of the model
        # @param query_or_payload [String,Hash,Object] The search request definition
        #                                              (string, JSON, Hash, or object responding to `to_hash`)
        # @param options [Hash] Optional parameters to be passed to the Elasticsearch client
        #
        def initialize(klass, query_or_payload, options={})
          @klass   = klass
          @options = options

          __index_name    = options[:index] || klass.index_name
          __document_type = options[:type]  || klass.document_type

          case
          # search query: ...
          when query_or_payload.respond_to?(:to_hash)
            body = query_or_payload.to_hash

          # search '{ "query" : ... }'
          when query_or_payload.is_a?(String) && query_or_payload =~ /^\s*{/
            body = query_or_payload

          # search '...'
          else
            q = query_or_payload
          end

          if body
            @definition = { index: __index_name, type: __document_type, body: body }.update options
          else
            @definition = { index: __index_name, type: __document_type, q: q }.update options
          end
        end

        # Performs the request and returns the response from client
        #
        # @return [Hash] The response from Elasticsearch
        #
        def execute!
          klass.client.search(@definition)
        end
      end

      module ClassMethods

        # Provides a `search` method for the model to easily search within an index/type
        # corresponding to the model settings.
        #
        # @param query_or_payload [String,Hash,Object] The search request definition
        #                                              (string, JSON, Hash, or object responding to `to_hash`)
        # @param options [Hash] Optional parameters to be passed to the Elasticsearch client
        #
        # @return [Elasticsearch::Model::Response::Response]
        #
        # @example Simple search in `Article`
        #
        #     Article.search 'foo'
        #
        # @example Search using a search definition as a Hash
        #
        #     response = Article.search \
        #                  query: {
        #                    match: {
        #                      title: 'foo'
        #                    }
        #                  },
        #                  highlight: {
        #                    fields: {
        #                      title: {}
        #                    }
        #                  },
        #                  size: 50
        #
        #     response.results.first.title
        #     # => "Foo"
        #
        #     response.results.first.highlight.title
        #     # => ["<em>Foo</em>"]
        #
        #     response.records.first.title
        #     #  Article Load (0.2ms)  SELECT "articles".* FROM "articles" WHERE "articles"."id" IN (1, 3)
        #     # => "Foo"
        #
        # @example Search using a search definition as a JSON string
        #
        #     Article.search '{"query" : { "match_all" : {} }}'
        #
        def search(query_or_payload, options={})
          search = SearchRequest.new(self, query_or_payload, options)

          Response::Response.new(self, search)
        end

      end

    end
  end
end
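The `case` in `SearchRequest#initialize` above routes the three accepted input forms to either the `:body` or the `:q` request parameter. A standalone sketch of that classification; `build_definition` is a hypothetical helper mirroring the logic, not part of the gem:

```ruby
# Sketch of SearchRequest's input classification, assuming fixed index/type names.
def build_definition(query_or_payload, index: 'articles', type: 'article')
  if query_or_payload.respond_to?(:to_hash)
    # Hash or hash-like object -> request body
    { index: index, type: type, body: query_or_payload.to_hash }
  elsif query_or_payload.is_a?(String) && query_or_payload =~ /^\s*{/
    # JSON string -> request body, passed through verbatim
    { index: index, type: type, body: query_or_payload }
  else
    # Anything else -> Lucene query string parameter
    { index: index, type: type, q: query_or_payload }
  end
end

p build_definition({ query: { match_all: {} } })  # Hash               -> :body
p build_definition('{"query":{"match_all":{}}}')  # JSON string        -> :body
p build_definition('title:test')                  # query string       -> :q
```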

@@ -1,35 +0,0 @@
module Elasticsearch
  module Model

    # Contains functionality for serializing model instances for the client
    #
    module Serializing

      module ClassMethods
      end

      module InstanceMethods

        # Serialize the record as a Hash, to be passed to the client.
        #
        # Re-define this method to customize the serialization.
        #
        # @return [Hash]
        #
        # @example Return the model instance as a Hash
        #
        #     Article.first.__elasticsearch__.as_indexed_json
        #     => {"title"=>"Foo"}
        #
        # @see Elasticsearch::Model::Indexing
        #
        def as_indexed_json(options={})
          # TODO: Play with the `MyModel.indexes` method -- reject non-mapped attributes, `:as` options, etc
          self.as_json(options.merge root: false)
        end

      end

    end
  end
end
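Re-defining `as_indexed_json`, as the comment above suggests, is how a model controls which attributes reach the index. A minimal sketch without Rails; the `Article` class and its plain-method `as_json` below are hypothetical stand-ins for the ActiveModel implementation the gem relies on:

```ruby
# Sketch of customizing the indexed document, assuming a plain Ruby model.
class Article
  def initialize(title, draft)
    @title = title
    @draft = draft
  end

  # Stand-in for ActiveModel's as_json
  def as_json(options = {})
    { 'title' => @title, 'draft' => @draft }
  end

  # Re-defined to drop attributes that should not be indexed
  def as_indexed_json(options = {})
    as_json(options).reject { |k, _| k == 'draft' }
  end
end

puts Article.new('Foo', true).as_indexed_json['title'] # => Foo
```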

@@ -1,5 +0,0 @@
module Elasticsearch
  module Model
    VERSION = '6.1.0'
  end
end

@@ -1,119 +0,0 @@
require 'spec_helper'

describe Elasticsearch::Model::Adapter do

  before(:all) do
    class ::DummyAdapterClass; end
    class ::DummyAdapterClassWithAdapter; end
    class ::DummyAdapter
      Records = Module.new
      Callbacks = Module.new
      Importing = Module.new
    end
  end

  after(:all) do
    [DummyAdapterClassWithAdapter, DummyAdapterClass, DummyAdapter].each do |adapter|
      Elasticsearch::Model::Adapter::Adapter.adapters.delete(adapter)
    end
    remove_classes(DummyAdapterClass, DummyAdapterClassWithAdapter, DummyAdapter)
  end

  describe '#from_class' do

    it 'should return an Adapter instance' do
      expect(Elasticsearch::Model::Adapter.from_class(DummyAdapterClass)).to be_a(Elasticsearch::Model::Adapter::Adapter)
    end
  end

  describe 'register' do

    before do
      expect(Elasticsearch::Model::Adapter::Adapter).to receive(:register).and_call_original
      Elasticsearch::Model::Adapter.register(:foo, lambda { |c| false })
    end

    it 'should register an adapter' do
      expect(Elasticsearch::Model::Adapter::Adapter.adapters[:foo]).to be_a(Proc)
    end

    context 'when a specific adapter class is set' do

      before do
        expect(Elasticsearch::Model::Adapter::Adapter).to receive(:register).and_call_original
        Elasticsearch::Model::Adapter::Adapter.register(DummyAdapter,
                                                        lambda { |c| c == DummyAdapterClassWithAdapter })
      end

      let(:adapter) do
        Elasticsearch::Model::Adapter::Adapter.new(DummyAdapterClassWithAdapter)
      end

      it 'should register the adapter' do
        expect(adapter.adapter).to eq(DummyAdapter)
      end
    end
  end

  describe 'default adapter' do

    let(:adapter) do
      Elasticsearch::Model::Adapter::Adapter.new(DummyAdapterClass)
    end

    it 'sets a default adapter' do
      expect(adapter.adapter).to eq(Elasticsearch::Model::Adapter::Default)
    end
  end

  describe '#records_mixin' do

    before do
      Elasticsearch::Model::Adapter::Adapter.register(DummyAdapter,
                                                      lambda { |c| c == DummyAdapterClassWithAdapter })
    end

    let(:adapter) do
      Elasticsearch::Model::Adapter::Adapter.new(DummyAdapterClassWithAdapter)
    end

    it 'returns a Module' do
      expect(adapter.records_mixin).to be_a(Module)
    end
  end

  describe '#callbacks_mixin' do

    before do
      Elasticsearch::Model::Adapter::Adapter.register(DummyAdapter,
                                                      lambda { |c| c == DummyAdapterClassWithAdapter })
    end

    let(:adapter) do
      Elasticsearch::Model::Adapter::Adapter.new(DummyAdapterClassWithAdapter)
    end

    it 'returns a Module' do
      expect(adapter.callbacks_mixin).to be_a(Module)
    end
  end

  describe '#importing_mixin' do

    before do
      Elasticsearch::Model::Adapter::Adapter.register(DummyAdapter,
                                                      lambda { |c| c == DummyAdapterClassWithAdapter })
    end

    let(:adapter) do
      Elasticsearch::Model::Adapter::Adapter.new(DummyAdapterClassWithAdapter)
    end

    it 'returns a Module' do
      expect(adapter.importing_mixin).to be_a(Module)
    end
  end
end

@@ -1,334 +0,0 @@
require 'spec_helper'

describe 'Elasticsearch::Model::Adapter::ActiveRecord Associations' do

  before(:all) do
    ActiveRecord::Schema.define(version: 1) do
      create_table :categories do |t|
        t.string :title
        t.timestamps null: false
      end

      create_table :categories_posts do |t|
        t.references :post, :category
      end

      create_table :authors do |t|
        t.string :first_name, :last_name
        t.timestamps null: false
      end

      create_table :authorships do |t|
        t.string :first_name, :last_name
        t.references :post
        t.references :author
        t.timestamps null: false
      end

      create_table :comments do |t|
        t.string :text
        t.string :author
        t.references :post
        t.timestamps null: false
      end

      add_index(:comments, :post_id) unless index_exists?(:comments, :post_id)

      create_table :posts do |t|
        t.string :title
        t.text :text
        t.boolean :published
        t.timestamps null: false
      end
    end

    Comment.__send__ :include, Elasticsearch::Model
    Comment.__send__ :include, Elasticsearch::Model::Callbacks
  end

  before do
    clear_tables(:categories, :categories_posts, :authors, :authorships, :comments, :posts)
    clear_indices(Post)
    Post.__elasticsearch__.create_index!(force: true)
    Comment.__elasticsearch__.create_index!(force: true)
  end

  after do
    clear_tables(Post, Category)
    clear_indices(Post)
  end

  context 'when a document is created' do

    before do
      Post.create!(title: 'Test')
      Post.create!(title: 'Testing Coding')
      Post.create!(title: 'Coding')
      Post.__elasticsearch__.refresh_index!
    end

    let(:search_result) do
      Post.search('title:test')
    end

    it 'indexes the document' do
      expect(search_result.results.size).to eq(2)
      expect(search_result.results.first.title).to eq('Test')
      expect(search_result.records.size).to eq(2)
      expect(search_result.records.first.title).to eq('Test')
    end
  end

  describe 'has_and_belongs_to_many association' do

    context 'when an association is updated' do

      before do
        post.categories = [category_a, category_b]
        Post.__elasticsearch__.refresh_index!
      end

      let(:category_a) do
        Category.where(title: "One").first_or_create!
      end

      let(:category_b) do
        Category.where(title: "Two").first_or_create!
      end

      let(:post) do
        Post.create! title: "First Post", text: "This is the first post..."
      end

      let(:search_result) do
        Post.search(query: {
                      bool: {
                        must: {
                          multi_match: {
                            fields: ['title'],
                            query: 'first'
                          }
                        },
                        filter: {
                          terms: {
                            categories: ['One']
                          }
                        }
                      }
                    })
      end

      it 'applies the update' do
        expect(search_result.results.size).to eq(1)
        expect(search_result.results.first.title).to eq('First Post')
        expect(search_result.records.size).to eq(1)
        expect(search_result.records.first.title).to eq('First Post')
      end
    end

    context 'when an association is deleted' do

      before do
        post.categories = [category_a, category_b]
        post.categories = [category_b]
        Post.__elasticsearch__.refresh_index!
      end

      let(:category_a) do
        Category.where(title: "One").first_or_create!
      end

      let(:category_b) do
        Category.where(title: "Two").first_or_create!
      end

      let(:post) do
        Post.create! title: "First Post", text: "This is the first post..."
      end

      let(:search_result) do
        Post.search(query: {
                      bool: {
                        must: {
                          multi_match: {
                            fields: ['title'],
                            query: 'first'
                          }
                        },
                        filter: {
                          terms: {
                            categories: ['One']
                          }
                        }
                      }
                    })
      end

      it 'applies the update with a reindex' do
        expect(search_result.results.size).to eq(0)
        expect(search_result.records.size).to eq(0)
      end
    end
  end

  describe 'has_many through association' do

    context 'when the association is updated' do

      before do
        author_a = Author.where(first_name: "John", last_name: "Smith").first_or_create!
        author_b = Author.where(first_name: "Mary", last_name: "Smith").first_or_create!
        author_c = Author.where(first_name: "Kobe", last_name: "Griss").first_or_create!

        # Create posts
        post_1 = Post.create!(title: "First Post", text: "This is the first post...")
        post_2 = Post.create!(title: "Second Post", text: "This is the second post...")
        post_3 = Post.create!(title: "Third Post", text: "This is the third post...")

        # Assign authors
        post_1.authors = [author_a, author_b]
        post_2.authors = [author_a]
        post_3.authors = [author_c]

        Post.__elasticsearch__.refresh_index!
      end

      context 'if active record is at least 4' do

        let(:search_result) do
          Post.search('authors.full_name:john')
        end

        it 'applies the update', if: active_record_at_least_4? do
          expect(search_result.results.size).to eq(2)
          expect(search_result.records.size).to eq(2)
        end
      end

      context 'if active record is less than 4' do

        let(:search_result) do
          Post.search('authors.author.full_name:john')
        end

        it 'applies the update', if: !active_record_at_least_4? do
          expect(search_result.results.size).to eq(2)
          expect(search_result.records.size).to eq(2)
        end
      end
    end

    context 'when an association is added', if: active_record_at_least_4? do

      before do
        author_a = Author.where(first_name: "John", last_name: "Smith").first_or_create!
        author_b = Author.where(first_name: "Mary", last_name: "Smith").first_or_create!

        # Create posts
        post_1 = Post.create!(title: "First Post", text: "This is the first post...")

        # Assign authors
        post_1.authors = [author_a]
        post_1.authors << author_b
        Post.__elasticsearch__.refresh_index!
      end

      let(:search_result) do
        Post.search('authors.full_name:john')
      end

      it 'adds the association' do
        expect(search_result.results.size).to eq(1)
        expect(search_result.records.size).to eq(1)
      end
    end
  end

  describe 'has_many association' do

    context 'when an association is added', if: active_record_at_least_4? do

      before do
        # Create posts
        post_1 = Post.create!(title: "First Post", text: "This is the first post...")
        post_2 = Post.create!(title: "Second Post", text: "This is the second post...")

        # Add comments
        post_1.comments.create!(author: 'John', text: 'Excellent')
        post_1.comments.create!(author: 'Abby', text: 'Good')

        post_2.comments.create!(author: 'John', text: 'Terrible')

        post_1.comments.create!(author: 'John', text: 'Or rather just good...')
        Post.__elasticsearch__.refresh_index!
      end

      let(:search_result) do
        Post.search(query: {
                      nested: {
                        path: 'comments',
                        query: {
                          bool: {
                            must: [
                              { match: { 'comments.author' => 'john' } },
                              { match: { 'comments.text' => 'good' } }
                            ]
                          }
                        }
                      }
                    })
      end

      it 'adds the association' do
        expect(search_result.results.size).to eq(1)
      end
    end
  end

  describe '#touch' do

    context 'when a touch callback is defined on the model' do

      before do
        # Create categories
        category_a = Category.where(title: "One").first_or_create!

        # Create post
        post = Post.create!(title: "First Post", text: "This is the first post...")

        # Assign category
        post.categories << category_a
        category_a.update_attribute(:title, "Updated")
        category_a.posts.each { |p| p.touch }

        Post.__elasticsearch__.refresh_index!
      end

      it 'executes the callback after #touch' do
        expect(Post.search('categories:One').size).to eq(0)
        expect(Post.search('categories:Updated').size).to eq(1)
      end
    end
  end

  describe '#includes' do

    before do
      post_1 = Post.create(title: 'One')
      post_2 = Post.create(title: 'Two')
      post_1.comments.create(text: 'First comment')
      post_2.comments.create(text: 'Second comment')

      Comment.__elasticsearch__.refresh_index!
    end

    let(:search_result) do
      Comment.search('first').records(includes: :post)
    end

    it 'eager loads associations' do
      expect(search_result.first.association(:post)).to be_loaded
      expect(search_result.first.post.title).to eq('One')
    end
  end
end

@@ -1,340 +0,0 @@
require 'spec_helper'

describe Elasticsearch::Model::Adapter::ActiveRecord do

  before(:all) do
    ActiveRecord::Schema.define(:version => 1) do
      create_table :articles do |t|
        t.string :title
        t.string :body
        t.integer :clicks, :default => 0
        t.datetime :created_at, :default => 'NOW()'
      end
    end

    Article.delete_all
    Article.__elasticsearch__.create_index!(force: true)

    Article.create!(title: 'Test', body: '', clicks: 1)
    Article.create!(title: 'Testing Coding', body: '', clicks: 2)
    Article.create!(title: 'Coding', body: '', clicks: 3)

    Article.__elasticsearch__.refresh_index!
  end

  describe 'indexing a document' do

    let(:search_result) do
      Article.search('title:test')
    end

    it 'allows searching for documents' do
      expect(search_result.results.size).to be(2)
      expect(search_result.records.size).to be(2)
    end
  end

  describe '#results' do

    let(:search_result) do
      Article.search('title:test')
    end

    it 'returns an instance of Response::Result' do
      expect(search_result.results.first).to be_a(Elasticsearch::Model::Response::Result)
    end

    it 'properly loads the document' do
      expect(search_result.results.first.title).to eq('Test')
    end

    context 'when the result contains other data' do

      let(:search_result) do
        Article.search(query: { match: { title: 'test' } }, highlight: { fields: { title: {} } })
      end

      it 'allows access to the Elasticsearch result' do
        expect(search_result.results.first.title).to eq('Test')
        expect(search_result.results.first.title?).to be(true)
        expect(search_result.results.first.boo?).to be(false)
        expect(search_result.results.first.highlight?).to be(true)
        expect(search_result.results.first.highlight.title?).to be(true)
        expect(search_result.results.first.highlight.boo?).to be(false)
      end
    end
  end

  describe '#records' do

    let(:search_result) do
      Article.search('title:test')
    end

    it 'returns an instance of the model' do
      expect(search_result.records.first).to be_a(Article)
    end

    it 'properly loads the document' do
      expect(search_result.records.first.title).to eq('Test')
    end
  end

  describe 'Enumerable' do

    let(:search_result) do
      Article.search('title:test')
    end

    it 'allows iteration over results' do
      expect(search_result.results.map(&:_id)).to eq(['1', '2'])
    end

    it 'allows iteration over records' do
      expect(search_result.records.map(&:id)).to eq([1, 2])
    end
  end

  describe '#id' do

    let(:search_result) do
      Article.search('title:test')
    end

    it 'returns the id' do
      expect(search_result.results.first.id).to eq('1')
    end
  end

  describe '#type' do

    let(:search_result) do
      Article.search('title:test')
    end

    it 'returns the type' do
      expect(search_result.results.first.type).to eq('article')
    end
  end

  describe '#each_with_hit' do

    let(:search_result) do
      Article.search('title:test')
    end

    it 'returns the record with the Elasticsearch hit' do
      search_result.records.each_with_hit do |r, h|
        expect(h._score).not_to be_nil
        expect(h._source.title).not_to be_nil
      end
    end
  end

  describe 'search results order' do

    let(:search_result) do
      Article.search(query: { match: { title: 'code' } }, sort: { clicks: :desc })
    end

    it 'preserves the search results order when accessing a single record' do
      expect(search_result.records[0].clicks).to be(3)
      expect(search_result.records[1].clicks).to be(2)
      expect(search_result.records.first).to eq(search_result.records[0])
    end

    it 'preserves the search results order for the list of records' do
      search_result.records.each_with_hit do |r, h|
        expect(r.id.to_s).to eq(h._id)
      end

      search_result.records.map_with_hit do |r, h|
        expect(r.id.to_s).to eq(h._id)
      end
    end
  end

  describe 'a paged collection' do

    let(:search_result) do
      Article.search(query: { match: { title: { query: 'test' } } },
                     size: 2,
                     from: 1)
    end

    it 'applies the paged options to the search' do
      expect(search_result.results.size).to eq(1)
      expect(search_result.results.first.title).to eq('Testing Coding')
      expect(search_result.records.size).to eq(1)
      expect(search_result.records.first.title).to eq('Testing Coding')
    end
  end

  describe '#destroy' do

    before do
      Article.create!(title: 'destroy', body: '', clicks: 1)
      Article.__elasticsearch__.refresh_index!
      Article.where(title: 'destroy').first.destroy

      Article.__elasticsearch__.refresh_index!
    end

    let(:search_result) do
      Article.search('title:test')
    end

    it 'removes the document from the index' do
      expect(Article.count).to eq(3)
      expect(search_result.results.size).to eq(2)
      expect(search_result.records.size).to eq(2)
    end
  end

  describe 'full document updates' do

    before do
      article = Article.create!(title: 'update', body: '', clicks: 1)
      Article.__elasticsearch__.refresh_index!
      article.title = 'Writing'
      article.save

      Article.__elasticsearch__.refresh_index!
    end

    let(:search_result) do
      Article.search('title:write')
    end

    it 'applies the update' do
      expect(search_result.results.size).to eq(1)
      expect(search_result.records.size).to eq(1)
    end
  end

  describe 'attribute updates' do

    before do
      article = Article.create!(title: 'update', body: '', clicks: 1)
      Article.__elasticsearch__.refresh_index!
      article.title = 'special'
      article.save

      Article.__elasticsearch__.refresh_index!
    end

    let(:search_result) do
      Article.search('title:special')
    end

    it 'applies the update' do
      expect(search_result.results.size).to eq(1)
      expect(search_result.records.size).to eq(1)
    end
  end

  describe '#save' do

    before do
      article = Article.create!(title: 'save', body: '', clicks: 1)

      ActiveRecord::Base.transaction do
        article.body = 'dummy'
        article.save

        article.title = 'special'
        article.save
      end

      article.__elasticsearch__.update_document
      Article.__elasticsearch__.refresh_index!
    end

    let(:search_result) do
      Article.search('body:dummy')
    end

    it 'applies the save' do
      expect(search_result.results.size).to eq(1)
      expect(search_result.records.size).to eq(1)
    end
  end

  describe 'a DSL search' do

    let(:search_result) do
      Article.search(query: { match: { title: { query: 'test' } } })
    end

    it 'returns the results' do
      expect(search_result.results.size).to eq(2)
      expect(search_result.records.size).to eq(2)
    end
  end

  describe 'chaining SQL queries on response.records' do

    let(:search_result) do
      Article.search(query: { match: { title: { query: 'test' } } })
    end

    it 'executes the SQL request with the chained query criteria' do
      expect(search_result.records.size).to eq(2)
      expect(search_result.records.where(title: 'Test').size).to eq(1)
      expect(search_result.records.where(title: 'Test').first.title).to eq('Test')
    end
  end

  describe 'ordering of SQL queries' do

    context 'when order is called on the ActiveRecord query' do

      let(:search_result) do
        Article.search query: { match: { title: { query: 'test' } } }
      end

      it 'allows the SQL query to be ordered independent of the Elasticsearch results order', unless: active_record_at_least_4? do
        expect(search_result.records.order('title DESC').first.title).to eq('Testing Coding')
        expect(search_result.records.order('title DESC')[0].title).to eq('Testing Coding')
      end

      it 'allows the SQL query to be ordered independent of the Elasticsearch results order', if: active_record_at_least_4? do
        expect(search_result.records.order(title: :desc).first.title).to eq('Testing Coding')
        expect(search_result.records.order(title: :desc)[0].title).to eq('Testing Coding')
      end
    end

    context 'when more methods are chained on the ActiveRecord query' do

      let(:search_result) do
        Article.search query: { match: { title: { query: 'test' } } }
      end

      it 'allows the SQL query to be ordered independent of the Elasticsearch results order', if: active_record_at_least_4? do
        expect(search_result.records.distinct.order(title: :desc).first.title).to eq('Testing Coding')
        expect(search_result.records.distinct.order(title: :desc)[0].title).to eq('Testing Coding')
      end
    end
  end

  describe 'access to the response via methods' do

    let(:search_result) do
      Article.search(query: { match: { title: { query: 'test' } } },
                     aggregations: {
                       dates: { date_histogram: { field: 'created_at', interval: 'hour' } },
                       clicks: { global: {}, aggregations: { min: { min: { field: 'clicks' } } } }
                     },
                     suggest: { text: 'tezt', title: { term: { field: 'title', suggest_mode: 'always' } } })
    end

    it 'allows document keys to be accessed via methods' do
      expect(search_result.aggregations.dates.buckets.first.doc_count).to eq(2)
      expect(search_result.aggregations.clicks.doc_count).to eq(6)
      expect(search_result.aggregations.clicks.min.value).to eq(1.0)
      expect(search_result.aggregations.clicks.max).to be_nil
      expect(search_result.suggestions.title.first.options.size).to eq(1)
      expect(search_result.suggestions.terms).to eq(['test'])
    end
  end
end

@@ -1,18 +0,0 @@
require 'spec_helper'

describe 'Elasticsearch::Model::Adapter::ActiveRecord Dynamic Index naming' do

  before do
    ArticleWithDynamicIndexName.counter = 0
  end

  it 'evaluates the index_name value' do
    expect(ArticleWithDynamicIndexName.index_name).to eq('articles-1')
  end

  it 're-evaluates the index name with each call' do
    expect(ArticleWithDynamicIndexName.index_name).to eq('articles-1')
    expect(ArticleWithDynamicIndexName.index_name).to eq('articles-2')
    expect(ArticleWithDynamicIndexName.index_name).to eq('articles-3')
  end
end
|
|
@@ -1,187 +0,0 @@
require 'spec_helper'

describe 'Elasticsearch::Model::Adapter::ActiveRecord Importing' do

  before(:all) do
    ActiveRecord::Schema.define(:version => 1) do
      create_table :import_articles do |t|
        t.string   :title
        t.integer  :views
        t.string   :numeric # For the sake of invalid data sent to Elasticsearch
        t.datetime :created_at, :default => 'NOW()'
      end
    end

    ImportArticle.delete_all
    ImportArticle.__elasticsearch__.client.cluster.health(wait_for_status: 'yellow')
  end

  before do
    ImportArticle.__elasticsearch__.create_index!
  end

  after do
    clear_indices(ImportArticle)
    clear_tables(ImportArticle)
  end

  describe '#import' do

    context 'when no search criteria is specified' do

      before do
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
        ImportArticle.import
        ImportArticle.__elasticsearch__.refresh_index!
      end

      it 'imports all documents' do
        expect(ImportArticle.search('*').results.total).to eq(10)
      end
    end

    context 'when batch size is specified' do

      before do
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
      end

      let!(:batch_count) do
        batches = 0
        errors = ImportArticle.import(batch_size: 5) do |response|
          batches += 1
        end
        ImportArticle.__elasticsearch__.refresh_index!
        batches
      end

      it 'imports using the batch size' do
        expect(batch_count).to eq(2)
      end

      it 'imports all the documents' do
        expect(ImportArticle.search('*').results.total).to eq(10)
      end
    end

    context 'when a scope is specified' do

      before do
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
        ImportArticle.import(scope: 'popular', force: true)
        ImportArticle.__elasticsearch__.refresh_index!
      end

      it 'applies the scope' do
        expect(ImportArticle.search('*').results.total).to eq(5)
      end
    end

    context 'when a query is specified' do

      before do
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
        ImportArticle.import(query: -> { where('views >= 3') })
        ImportArticle.__elasticsearch__.refresh_index!
      end

      it 'applies the query' do
        expect(ImportArticle.search('*').results.total).to eq(7)
      end
    end

    context 'when there are invalid documents' do

      let!(:result) do
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
        new_article
        batches = 0
        errors = ImportArticle.__elasticsearch__.import(batch_size: 5) do |response|
          batches += 1
        end
        ImportArticle.__elasticsearch__.refresh_index!
        { batch_size: batches, errors: errors }
      end

      let(:new_article) do
        ImportArticle.create!(title: "Test INVALID", numeric: "INVALID")
      end

      it 'does not import them' do
        expect(ImportArticle.search('*').results.total).to eq(10)
        expect(result[:batch_size]).to eq(3)
        expect(result[:errors]).to eq(1)
      end
    end

    context 'when a transform proc is specified' do

      before do
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
        ImportArticle.import(transform: ->(a) { { index: { data: { name: a.title, foo: 'BAR' } } } })
        ImportArticle.__elasticsearch__.refresh_index!
      end

      it 'transforms the documents' do
        expect(ImportArticle.search('*').results.first._source.keys).to include('name')
        expect(ImportArticle.search('*').results.first._source.keys).to include('foo')
      end

      it 'imports all documents' do
        expect(ImportArticle.search('test').results.total).to eq(10)
        expect(ImportArticle.search('bar').results.total).to eq(10)
      end
    end

    context 'when the model has a default scope' do

      around(:all) do |example|
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
        ImportArticle.instance_eval { default_scope { where('views > 3') } }
        example.run
        ImportArticle.default_scopes.pop
      end

      before do
        ImportArticle.__elasticsearch__.import
        ImportArticle.__elasticsearch__.refresh_index!
      end

      it 'uses the default scope' do
        expect(ImportArticle.search('*').results.total).to eq(6)
      end
    end

    context 'when there is a default scope and a query specified' do

      around(:all) do |example|
        10.times { |i| ImportArticle.create! title: 'Test', views: "#{i}" }
        ImportArticle.instance_eval { default_scope { where('views > 3') } }
        example.run
        ImportArticle.default_scopes.pop
      end

      before do
        ImportArticle.import(query: -> { where('views <= 4') })
        ImportArticle.__elasticsearch__.refresh_index!
      end

      it 'combines the query and the default scope' do
        expect(ImportArticle.search('*').results.total).to eq(1)
      end
    end

    context 'when the batch is empty' do

      before do
        ImportArticle.delete_all
        ImportArticle.import
        ImportArticle.__elasticsearch__.refresh_index!
      end

      it 'does not make any requests to create documents' do
        expect(ImportArticle.search('*').results.total).to eq(0)
      end
    end
  end
end
@@ -1,110 +0,0 @@
require 'spec_helper'

describe 'Elasticsearch::Model::Adapter::ActiveRecord MultiModel' do

  before(:all) do
    ActiveRecord::Schema.define do
      create_table Episode.table_name do |t|
        t.string :name
        t.datetime :created_at, :default => 'NOW()'
      end

      create_table Series.table_name do |t|
        t.string :name
        t.datetime :created_at, :default => 'NOW()'
      end
    end
  end

  before do
    models = [ Episode, Series ]
    clear_tables(models)
    models.each do |model|
      model.__elasticsearch__.create_index! force: true
      model.create name: "The #{model.name}"
      model.create name: "A great #{model.name}"
      model.create name: "The greatest #{model.name}"
      model.__elasticsearch__.refresh_index!
    end
  end

  after do
    clear_indices(Episode, Series)
    clear_tables(Episode, Series)
  end

  context 'when the search is across multimodels' do

    let(:search_result) do
      Elasticsearch::Model.search(%q<"The greatest Episode"^2 OR "The greatest Series">, [Series, Episode])
    end

    it 'executes the search across models' do
      expect(search_result.results.size).to eq(2)
      expect(search_result.records.size).to eq(2)
    end

    describe '#results' do

      it 'returns an instance of Elasticsearch::Model::Response::Result' do
        expect(search_result.results[0]).to be_a(Elasticsearch::Model::Response::Result)
        expect(search_result.results[1]).to be_a(Elasticsearch::Model::Response::Result)
      end

      it 'returns the correct model instance' do
        expect(search_result.results[0].name).to eq('The greatest Episode')
        expect(search_result.results[1].name).to eq('The greatest Series')
      end

      it 'provides access to the results' do
        expect(search_result.results[0].name).to eq('The greatest Episode')
        expect(search_result.results[0].name?).to be(true)
        expect(search_result.results[0].boo?).to be(false)

        expect(search_result.results[1].name).to eq('The greatest Series')
        expect(search_result.results[1].name?).to be(true)
        expect(search_result.results[1].boo?).to be(false)
      end
    end

    describe '#records' do

      it 'returns an instance of Elasticsearch::Model::Response::Result' do
        expect(search_result.records[0]).to be_a(Episode)
        expect(search_result.records[1]).to be_a(Series)
      end

      it 'returns the correct model instance' do
        expect(search_result.records[0].name).to eq('The greatest Episode')
        expect(search_result.records[1].name).to eq('The greatest Series')
      end

      context 'when the data store is changed' do

        before do
          Series.find_by_name("The greatest Series").delete
          Series.__elasticsearch__.refresh_index!
        end

        it 'only returns matching records' do
          expect(search_result.results.size).to eq(2)
          expect(search_result.records.size).to eq(1)
          expect(search_result.records[0].name).to eq('The greatest Episode')
        end
      end
    end

    describe 'pagination' do

      let(:search_result) do
        Elasticsearch::Model.search('series OR episode', [Series, Episode])
      end

      it 'properly paginates the results' do
        expect(search_result.page(1).per(3).results.size).to eq(3)
        expect(search_result.page(2).per(3).results.size).to eq(3)
        expect(search_result.page(3).per(3).results.size).to eq(0)
      end
    end
  end
end
@@ -1,38 +0,0 @@
require 'spec_helper'

describe 'Elasticsearch::Model::Adapter::ActiveRecord Namespaced Model' do

  before(:all) do
    ActiveRecord::Schema.define(:version => 1) do
      create_table :books do |t|
        t.string :title
      end
    end

    MyNamespace::Book.delete_all
    MyNamespace::Book.__elasticsearch__.create_index!(force: true)
    MyNamespace::Book.create!(title: 'Test')
    MyNamespace::Book.__elasticsearch__.refresh_index!
  end

  after do
    clear_indices(MyNamespace::Book)
    clear_tables(MyNamespace::Book)
  end

  context 'when the model is namespaced' do

    it 'has the proper index name' do
      expect(MyNamespace::Book.index_name).to eq('my_namespace-books')
    end

    it 'has the proper document type' do
      expect(MyNamespace::Book.document_type).to eq('book')
    end

    it 'saves the document into the index' do
      expect(MyNamespace::Book.search('title:test').results.size).to eq(1)
      expect(MyNamespace::Book.search('title:test').results.first.title).to eq('Test')
    end
  end
end
@@ -1,315 +0,0 @@
require 'spec_helper'

describe 'Elasticsearch::Model::Adapter::ActiveRecord Pagination' do

  before(:all) do
    ActiveRecord::Schema.define(:version => 1) do
      create_table ArticleForPagination.table_name do |t|
        t.string   :title
        t.datetime :created_at, :default => 'NOW()'
        t.boolean  :published
      end
    end

    Kaminari::Hooks.init if defined?(Kaminari::Hooks)

    ArticleForPagination.__elasticsearch__.create_index! force: true

    68.times do |i|
      ArticleForPagination.create! title: "Test #{i}", published: (i % 2 == 0)
    end

    ArticleForPagination.import
    ArticleForPagination.__elasticsearch__.refresh_index!
  end

  context 'when no other page is specified' do

    let(:records) do
      ArticleForPagination.search('title:test').page(1).records
    end

    describe '#size' do

      it 'returns the correct size' do
        expect(records.size).to eq(25)
      end
    end

    describe '#current_page' do

      it 'returns the correct current page' do
        expect(records.current_page).to eq(1)
      end
    end

    describe '#prev_page' do

      it 'returns the correct previous page' do
        expect(records.prev_page).to be_nil
      end
    end

    describe '#next_page' do

      it 'returns the correct next page' do
        expect(records.next_page).to eq(2)
      end
    end

    describe '#total_pages' do

      it 'returns the correct total pages' do
        expect(records.total_pages).to eq(3)
      end
    end

    describe '#first_page?' do

      it 'returns the correct first page' do
        expect(records.first_page?).to be(true)
      end
    end

    describe '#last_page?' do

      it 'returns the correct last page' do
        expect(records.last_page?).to be(false)
      end
    end

    describe '#out_of_range?' do

      it 'returns whether the pagination is out of range' do
        expect(records.out_of_range?).to be(false)
      end
    end
  end

  context 'when a specific page is specified' do

    let(:records) do
      ArticleForPagination.search('title:test').page(2).records
    end

    describe '#size' do

      it 'returns the correct size' do
        expect(records.size).to eq(25)
      end
    end

    describe '#current_page' do

      it 'returns the correct current page' do
        expect(records.current_page).to eq(2)
      end
    end

    describe '#prev_page' do

      it 'returns the correct previous page' do
        expect(records.prev_page).to eq(1)
      end
    end

    describe '#next_page' do

      it 'returns the correct next page' do
        expect(records.next_page).to eq(3)
      end
    end

    describe '#total_pages' do

      it 'returns the correct total pages' do
        expect(records.total_pages).to eq(3)
      end
    end

    describe '#first_page?' do

      it 'returns the correct first page' do
        expect(records.first_page?).to be(false)
      end
    end

    describe '#last_page?' do

      it 'returns the correct last page' do
        expect(records.last_page?).to be(false)
      end
    end

    describe '#out_of_range?' do

      it 'returns whether the pagination is out of range' do
        expect(records.out_of_range?).to be(false)
      end
    end
  end

  context 'when the last page is specified' do

    let(:records) do
      ArticleForPagination.search('title:test').page(3).records
    end

    describe '#size' do

      it 'returns the correct size' do
        expect(records.size).to eq(18)
      end
    end

    describe '#current_page' do

      it 'returns the correct current page' do
        expect(records.current_page).to eq(3)
      end
    end

    describe '#prev_page' do

      it 'returns the correct previous page' do
        expect(records.prev_page).to eq(2)
      end
    end

    describe '#next_page' do

      it 'returns the correct next page' do
        expect(records.next_page).to be_nil
      end
    end

    describe '#total_pages' do

      it 'returns the correct total pages' do
        expect(records.total_pages).to eq(3)
      end
    end

    describe '#first_page?' do

      it 'returns the correct first page' do
        expect(records.first_page?).to be(false)
      end
    end

    describe '#last_page?' do

      it 'returns the correct last page' do
        expect(records.last_page?).to be(true)
      end
    end

    describe '#out_of_range?' do

      it 'returns whether the pagination is out of range' do
        expect(records.out_of_range?).to be(false)
      end
    end
  end

  context 'when an invalid page is specified' do

    let(:records) do
      ArticleForPagination.search('title:test').page(6).records
    end

    describe '#size' do

      it 'returns the correct size' do
        expect(records.size).to eq(0)
      end
    end

    describe '#current_page' do

      it 'returns the correct current page' do
        expect(records.current_page).to eq(6)
      end
    end

    describe '#next_page' do

      it 'returns the correct next page' do
        expect(records.next_page).to be_nil
      end
    end

    describe '#total_pages' do

      it 'returns the correct total pages' do
        expect(records.total_pages).to eq(3)
      end
    end

    describe '#first_page?' do

      it 'returns the correct first page' do
        expect(records.first_page?).to be(false)
      end
    end

    describe '#last_page?' do

      it 'returns whether it is the last page', if: !(Kaminari::VERSION < '1') do
        expect(records.last_page?).to be(false)
      end

      it 'returns whether it is the last page', if: Kaminari::VERSION < '1' do
        expect(records.last_page?).to be(true) # Kaminari returns current_page >= total_pages in version < 1.0
      end
    end

    describe '#out_of_range?' do

      it 'returns whether the pagination is out of range' do
        expect(records.out_of_range?).to be(true)
      end
    end
  end

  context 'when a scope is also specified' do

    let(:records) do
      ArticleForPagination.search('title:test').page(2).records.published
    end

    describe '#size' do

      it 'returns the correct size' do
        expect(records.size).to eq(12)
      end
    end
  end

  context 'when a sorting is specified' do

    let(:search) do
      ArticleForPagination.search({ query: { match: { title: 'test' } }, sort: [ { id: 'desc' } ] })
    end

    it 'applies the sort' do
      expect(search.page(2).records.first.id).to eq(43)
      expect(search.page(3).records.first.id).to eq(18)
      expect(search.page(2).per(5).records.first.id).to eq(63)
    end
  end

  context 'when the model has a specific default per page set' do

    around do |example|
      original_default = ArticleForPagination.instance_variable_get(:@_default_per_page)
      ArticleForPagination.paginates_per 50
      example.run
      ArticleForPagination.paginates_per original_default
    end

    it 'uses the default per page setting' do
      expect(ArticleForPagination.search('*').page(1).records.size).to eq(50)
    end
  end
end